Test Report: KVM_Linux_crio 17830

f2d99d5d3acbee63fb92e6e0c0b75bbff35f3ad4:2024-01-09:32615

Failed tests (26/306)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Registry 19.77
35 TestAddons/parallel/Ingress 155.82
49 TestAddons/StoppedEnableDisable 155.43
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 177.02
213 TestMultiNode/serial/PingHostFrom2Pods 3.25
220 TestMultiNode/serial/RestartKeepsNodes 691.61
222 TestMultiNode/serial/StopMultiNode 143.55
229 TestPreload 226.42
235 TestRunningBinaryUpgrade 142.34
243 TestStoppedBinaryUpgrade/Upgrade 306.71
335 TestStartStop/group/old-k8s-version/serial/Stop 140.22
337 TestStartStop/group/embed-certs/serial/Stop 139.57
340 TestStartStop/group/no-preload/serial/Stop 139.58
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.17
344 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.41
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.58
353 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.49
354 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.54
355 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.43
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 435.18
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 168.72
358 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 129.33
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 42.63
TestAddons/parallel/Registry (19.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 33.921903ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5phsw" [886a9630-22c3-4d03-b42f-b2c1186c7c19] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006675984s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-br7js" [770ce618-3a9f-47a5-9070-e7364b2a564a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006210591s
addons_test.go:340: (dbg) Run:  kubectl --context addons-910124 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-910124 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-910124 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.293368705s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 ip
2024/01/08 22:56:05 [DEBUG] GET http://192.168.39.129:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910124 addons disable registry --alsologtostderr -v=1: exit status 11 (504.80291ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0108 22:56:05.196161  408873 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:56:05.196349  408873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:56:05.196362  408873 out.go:309] Setting ErrFile to fd 2...
	I0108 22:56:05.196369  408873 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:56:05.196593  408873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 22:56:05.196884  408873 mustload.go:65] Loading cluster: addons-910124
	I0108 22:56:05.197304  408873 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:56:05.197335  408873 addons.go:600] checking whether the cluster is paused
	I0108 22:56:05.197448  408873 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:56:05.197476  408873 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:56:05.197934  408873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:56:05.198007  408873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:56:05.214070  408873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0108 22:56:05.214745  408873 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:56:05.215555  408873 main.go:141] libmachine: Using API Version  1
	I0108 22:56:05.215585  408873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:56:05.216025  408873 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:56:05.216285  408873 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:56:05.218626  408873 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:56:05.218978  408873 ssh_runner.go:195] Run: systemctl --version
	I0108 22:56:05.219018  408873 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:56:05.221773  408873 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:56:05.222331  408873 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:56:05.222375  408873 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:56:05.222508  408873 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:56:05.222741  408873 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:56:05.222941  408873 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:56:05.223106  408873 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:56:05.366035  408873 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:56:05.366111  408873 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:56:05.489694  408873 cri.go:89] found id: "d63ce05bf444dc65a4457eb0a0093c6de21a1fc0e607cae8edcb22bfec0d3dcd"
	I0108 22:56:05.489735  408873 cri.go:89] found id: "4900f098ebffaef5817058bf97d18f4118959d7734e9a0ce38e9ffa968c23d9a"
	I0108 22:56:05.489744  408873 cri.go:89] found id: "52fe595d19340ca045ea1821d7ef0f91ed6816ac5ebc3930c37f27784443391f"
	I0108 22:56:05.489750  408873 cri.go:89] found id: "6ad4963f73159aa1b00621f57e7df540b37124bd3fc9ad269d22d86a2cb6003c"
	I0108 22:56:05.489756  408873 cri.go:89] found id: "6825938a883ca0d69bb236a09daf8c5b31dffb48a5ac3771bcc984a73668c59a"
	I0108 22:56:05.489766  408873 cri.go:89] found id: "1d86be379877e9a08ac081cefe10602e6e96aedfa7e81db287da93f3b16bd8e3"
	I0108 22:56:05.489775  408873 cri.go:89] found id: "5557a617adeb561622ddecd9b321f41b77b33e532c6a5c5219ee7d5d68cdc54e"
	I0108 22:56:05.489781  408873 cri.go:89] found id: "02839fcac5ca6c3856d544c4658365f3d400a9ac66034a71a95d624e3c615e62"
	I0108 22:56:05.489787  408873 cri.go:89] found id: "4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c"
	I0108 22:56:05.489798  408873 cri.go:89] found id: "b06f34b50d399d1d4a619dbe2bd86d31b94fb8cc09d34f3384b0694d69a5caf1"
	I0108 22:56:05.489802  408873 cri.go:89] found id: "77779480dad8559502a544d1f41828f129351da41a867edb475046541bde1e52"
	I0108 22:56:05.489805  408873 cri.go:89] found id: "d335b3cfb0835b4edc1ee00b4ca8778961b740f6d90d988ac28e635ea65ece19"
	I0108 22:56:05.489820  408873 cri.go:89] found id: "f207408f227421da3cd414bc178d53a5b41db6f02223ef536728b2c2e47836e5"
	I0108 22:56:05.489837  408873 cri.go:89] found id: "1919095aac1ff7b608936ab8a28482e7b6da3ecdd03c191becc7b1faacca2b7a"
	I0108 22:56:05.489846  408873 cri.go:89] found id: "41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5"
	I0108 22:56:05.489854  408873 cri.go:89] found id: "f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465"
	I0108 22:56:05.489860  408873 cri.go:89] found id: "1558990c8dd707a0beaec89ba9e6656c8f0c85cc85fc81c0508d7795a77d34cf"
	I0108 22:56:05.489873  408873 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:56:05.489883  408873 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:56:05.489889  408873 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:56:05.489893  408873 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:56:05.489902  408873 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:56:05.489912  408873 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:56:05.489918  408873 cri.go:89] found id: ""
	I0108 22:56:05.490011  408873 ssh_runner.go:195] Run: sudo runc list -f json
	I0108 22:56:05.621092  408873 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:05.621121  408873 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:56:05.621439  408873 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:05.621463  408873 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:05.621497  408873 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:56:05.624181  408873 out.go:177] 
	W0108 22:56:05.626015  408873 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T22:56:05Z" level=error msg="stat /run/runc/9640e599e7dc05bf05bb5b74a73f9ef578c9834bbc019f7c34902c09bad052b8: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T22:56:05Z" level=error msg="stat /run/runc/9640e599e7dc05bf05bb5b74a73f9ef578c9834bbc019f7c34902c09bad052b8: no such file or directory"
	
	W0108 22:56:05.626047  408873 out.go:239] * 
	* 
	W0108 22:56:05.629261  408873 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 22:56:05.630759  408873 out.go:177] 

** /stderr **
addons_test.go:390: failed to disable registry addon. args "out/minikube-linux-amd64 -p addons-910124 addons disable registry --alsologtostderr -v=1": exit status 11
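Note: the MK_ADDON_DISABLE_PAUSED error above comes from the disable path's paused-cluster check. As the stderr shows, it first lists kube-system containers with crictl and then asks runc for its container list; the `sudo runc list -f json` step exited with status 1 because a state directory under /run/runc had already disappeared, which is consistent with a container exiting between the two calls. A minimal sketch for re-running that check by hand, assuming the addons-910124 VM from this run is still up and reachable (both commands are taken verbatim from the log, only wrapped in minikube ssh):

    # containers the check enumerates first (same crictl invocation as in the log)
    out/minikube-linux-amd64 -p addons-910124 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # the step that returned exit status 1 in this run
    out/minikube-linux-amd64 -p addons-910124 ssh -- "sudo runc list -f json"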
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910124 -n addons-910124
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 logs -n 25: (2.432843518s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |                     |
	|         | -p download-only-138294              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |                     |
	|         | -p download-only-138294              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | -p download-only-138294              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| delete  | -p download-only-138294              | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| delete  | -p download-only-138294              | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| start   | --download-only -p                   | binary-mirror-576323 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | binary-mirror-576323                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42563               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-576323              | binary-mirror-576323 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| addons  | enable dashboard -p                  | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-910124                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-910124                        |                      |         |         |                     |                     |
	| start   | -p addons-910124 --wait=true         | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:55 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         |  --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:55 UTC | 08 Jan 24 22:55 UTC |
	|         | -p addons-910124                     |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | addons-910124                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | -p addons-910124                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-910124 ip                     | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	| addons  | addons-910124 addons disable         | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC |                     |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:52:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:52:13.382889  407512 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:52:13.383046  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:13.383052  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:52:13.383056  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:13.383252  407512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 22:52:13.383931  407512 out.go:303] Setting JSON to false
	I0108 22:52:13.384848  407512 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12859,"bootTime":1704741474,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:52:13.384973  407512 start.go:138] virtualization: kvm guest
	I0108 22:52:13.387433  407512 out.go:177] * [addons-910124] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:52:13.389066  407512 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 22:52:13.390347  407512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:52:13.389147  407512 notify.go:220] Checking for updates...
	I0108 22:52:13.391854  407512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:52:13.393313  407512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:13.394637  407512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:52:13.395949  407512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:52:13.397472  407512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:52:13.434904  407512 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:52:13.436211  407512 start.go:298] selected driver: kvm2
	I0108 22:52:13.436234  407512 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:52:13.436250  407512 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:52:13.437005  407512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:13.437103  407512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:52:13.454531  407512 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:52:13.454588  407512 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:52:13.454846  407512 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:52:13.454928  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:52:13.454942  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:52:13.454952  407512 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:52:13.454973  407512 start_flags.go:323] config:
	{Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:13.455474  407512 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:13.457737  407512 out.go:177] * Starting control plane node addons-910124 in cluster addons-910124
	I0108 22:52:13.459865  407512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:13.459927  407512 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:52:13.459943  407512 cache.go:56] Caching tarball of preloaded images
	I0108 22:52:13.460053  407512 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:52:13.460065  407512 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:52:13.460482  407512 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json ...
	I0108 22:52:13.460516  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json: {Name:mkb106a6a83962c00c178961d9c58cf64f36e4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:13.460693  407512 start.go:365] acquiring machines lock for addons-910124: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:52:13.460754  407512 start.go:369] acquired machines lock for "addons-910124" in 43.497µs
	I0108 22:52:13.460781  407512 start.go:93] Provisioning new machine with config: &{Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:52:13.460883  407512 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:52:13.463549  407512 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 22:52:13.463742  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:52:13.463778  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:52:13.479313  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0108 22:52:13.480214  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:52:13.481142  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:52:13.481168  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:52:13.481896  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:52:13.482128  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:13.482314  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:13.482491  407512 start.go:159] libmachine.API.Create for "addons-910124" (driver="kvm2")
	I0108 22:52:13.482534  407512 client.go:168] LocalClient.Create starting
	I0108 22:52:13.482586  407512 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0108 22:52:13.742583  407512 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0108 22:52:13.886416  407512 main.go:141] libmachine: Running pre-create checks...
	I0108 22:52:13.886452  407512 main.go:141] libmachine: (addons-910124) Calling .PreCreateCheck
	I0108 22:52:13.887083  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:13.887782  407512 main.go:141] libmachine: Creating machine...
	I0108 22:52:13.887806  407512 main.go:141] libmachine: (addons-910124) Calling .Create
	I0108 22:52:13.888019  407512 main.go:141] libmachine: (addons-910124) Creating KVM machine...
	I0108 22:52:13.889641  407512 main.go:141] libmachine: (addons-910124) DBG | found existing default KVM network
	I0108 22:52:13.890775  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:13.890469  407534 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a50}
	I0108 22:52:13.896745  407512 main.go:141] libmachine: (addons-910124) DBG | trying to create private KVM network mk-addons-910124 192.168.39.0/24...
	I0108 22:52:13.985496  407512 main.go:141] libmachine: (addons-910124) DBG | private KVM network mk-addons-910124 192.168.39.0/24 created
	I0108 22:52:13.985547  407512 main.go:141] libmachine: (addons-910124) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 ...
	I0108 22:52:13.985563  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:13.985450  407534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:13.985582  407512 main.go:141] libmachine: (addons-910124) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:52:13.985692  407512 main.go:141] libmachine: (addons-910124) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 22:52:14.239725  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.239579  407534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa...
	I0108 22:52:14.297213  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.297082  407534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/addons-910124.rawdisk...
	I0108 22:52:14.297262  407512 main.go:141] libmachine: (addons-910124) DBG | Writing magic tar header
	I0108 22:52:14.297279  407512 main.go:141] libmachine: (addons-910124) DBG | Writing SSH key tar header
	I0108 22:52:14.297340  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.297276  407534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 ...
	I0108 22:52:14.297378  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124
	I0108 22:52:14.297403  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0108 22:52:14.297435  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 (perms=drwx------)
	I0108 22:52:14.297457  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:14.297472  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0108 22:52:14.297484  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:52:14.297495  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:52:14.297506  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home
	I0108 22:52:14.297580  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:52:14.297616  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0108 22:52:14.297631  407512 main.go:141] libmachine: (addons-910124) DBG | Skipping /home - not owner
	I0108 22:52:14.297647  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0108 22:52:14.297659  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:52:14.297672  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:52:14.297684  407512 main.go:141] libmachine: (addons-910124) Creating domain...
	I0108 22:52:14.298759  407512 main.go:141] libmachine: (addons-910124) define libvirt domain using xml: 
	I0108 22:52:14.298793  407512 main.go:141] libmachine: (addons-910124) <domain type='kvm'>
	I0108 22:52:14.298802  407512 main.go:141] libmachine: (addons-910124)   <name>addons-910124</name>
	I0108 22:52:14.298808  407512 main.go:141] libmachine: (addons-910124)   <memory unit='MiB'>4000</memory>
	I0108 22:52:14.298814  407512 main.go:141] libmachine: (addons-910124)   <vcpu>2</vcpu>
	I0108 22:52:14.298820  407512 main.go:141] libmachine: (addons-910124)   <features>
	I0108 22:52:14.298825  407512 main.go:141] libmachine: (addons-910124)     <acpi/>
	I0108 22:52:14.298831  407512 main.go:141] libmachine: (addons-910124)     <apic/>
	I0108 22:52:14.298841  407512 main.go:141] libmachine: (addons-910124)     <pae/>
	I0108 22:52:14.298849  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.298884  407512 main.go:141] libmachine: (addons-910124)   </features>
	I0108 22:52:14.298904  407512 main.go:141] libmachine: (addons-910124)   <cpu mode='host-passthrough'>
	I0108 22:52:14.298910  407512 main.go:141] libmachine: (addons-910124)   
	I0108 22:52:14.298915  407512 main.go:141] libmachine: (addons-910124)   </cpu>
	I0108 22:52:14.298925  407512 main.go:141] libmachine: (addons-910124)   <os>
	I0108 22:52:14.298952  407512 main.go:141] libmachine: (addons-910124)     <type>hvm</type>
	I0108 22:52:14.298965  407512 main.go:141] libmachine: (addons-910124)     <boot dev='cdrom'/>
	I0108 22:52:14.298972  407512 main.go:141] libmachine: (addons-910124)     <boot dev='hd'/>
	I0108 22:52:14.298979  407512 main.go:141] libmachine: (addons-910124)     <bootmenu enable='no'/>
	I0108 22:52:14.298986  407512 main.go:141] libmachine: (addons-910124)   </os>
	I0108 22:52:14.298992  407512 main.go:141] libmachine: (addons-910124)   <devices>
	I0108 22:52:14.298999  407512 main.go:141] libmachine: (addons-910124)     <disk type='file' device='cdrom'>
	I0108 22:52:14.299043  407512 main.go:141] libmachine: (addons-910124)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/boot2docker.iso'/>
	I0108 22:52:14.299072  407512 main.go:141] libmachine: (addons-910124)       <target dev='hdc' bus='scsi'/>
	I0108 22:52:14.299085  407512 main.go:141] libmachine: (addons-910124)       <readonly/>
	I0108 22:52:14.299093  407512 main.go:141] libmachine: (addons-910124)     </disk>
	I0108 22:52:14.299105  407512 main.go:141] libmachine: (addons-910124)     <disk type='file' device='disk'>
	I0108 22:52:14.299111  407512 main.go:141] libmachine: (addons-910124)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:52:14.299123  407512 main.go:141] libmachine: (addons-910124)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/addons-910124.rawdisk'/>
	I0108 22:52:14.299131  407512 main.go:141] libmachine: (addons-910124)       <target dev='hda' bus='virtio'/>
	I0108 22:52:14.299141  407512 main.go:141] libmachine: (addons-910124)     </disk>
	I0108 22:52:14.299153  407512 main.go:141] libmachine: (addons-910124)     <interface type='network'>
	I0108 22:52:14.299166  407512 main.go:141] libmachine: (addons-910124)       <source network='mk-addons-910124'/>
	I0108 22:52:14.299180  407512 main.go:141] libmachine: (addons-910124)       <model type='virtio'/>
	I0108 22:52:14.299194  407512 main.go:141] libmachine: (addons-910124)     </interface>
	I0108 22:52:14.299212  407512 main.go:141] libmachine: (addons-910124)     <interface type='network'>
	I0108 22:52:14.299222  407512 main.go:141] libmachine: (addons-910124)       <source network='default'/>
	I0108 22:52:14.299231  407512 main.go:141] libmachine: (addons-910124)       <model type='virtio'/>
	I0108 22:52:14.299239  407512 main.go:141] libmachine: (addons-910124)     </interface>
	I0108 22:52:14.299245  407512 main.go:141] libmachine: (addons-910124)     <serial type='pty'>
	I0108 22:52:14.299253  407512 main.go:141] libmachine: (addons-910124)       <target port='0'/>
	I0108 22:52:14.299259  407512 main.go:141] libmachine: (addons-910124)     </serial>
	I0108 22:52:14.299265  407512 main.go:141] libmachine: (addons-910124)     <console type='pty'>
	I0108 22:52:14.299271  407512 main.go:141] libmachine: (addons-910124)       <target type='serial' port='0'/>
	I0108 22:52:14.299280  407512 main.go:141] libmachine: (addons-910124)     </console>
	I0108 22:52:14.299286  407512 main.go:141] libmachine: (addons-910124)     <rng model='virtio'>
	I0108 22:52:14.299297  407512 main.go:141] libmachine: (addons-910124)       <backend model='random'>/dev/random</backend>
	I0108 22:52:14.299304  407512 main.go:141] libmachine: (addons-910124)     </rng>
	I0108 22:52:14.299310  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.299318  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.299326  407512 main.go:141] libmachine: (addons-910124)   </devices>
	I0108 22:52:14.299334  407512 main.go:141] libmachine: (addons-910124) </domain>
	I0108 22:52:14.299340  407512 main.go:141] libmachine: (addons-910124) 
	I0108 22:52:14.304243  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:7a:22:34 in network default
	I0108 22:52:14.304927  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:14.304956  407512 main.go:141] libmachine: (addons-910124) Ensuring networks are active...
	I0108 22:52:14.305798  407512 main.go:141] libmachine: (addons-910124) Ensuring network default is active
	I0108 22:52:14.306110  407512 main.go:141] libmachine: (addons-910124) Ensuring network mk-addons-910124 is active
	I0108 22:52:14.306740  407512 main.go:141] libmachine: (addons-910124) Getting domain xml...
	I0108 22:52:14.307605  407512 main.go:141] libmachine: (addons-910124) Creating domain...
	I0108 22:52:15.621820  407512 main.go:141] libmachine: (addons-910124) Waiting to get IP...
	I0108 22:52:15.622779  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:15.623348  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:15.623515  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:15.623409  407534 retry.go:31] will retry after 256.402572ms: waiting for machine to come up
	I0108 22:52:15.882358  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:15.882860  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:15.882921  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:15.882778  407534 retry.go:31] will retry after 252.502976ms: waiting for machine to come up
	I0108 22:52:16.137292  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:16.137802  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:16.137839  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:16.137754  407534 retry.go:31] will retry after 420.002938ms: waiting for machine to come up
	I0108 22:52:16.559696  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:16.560282  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:16.560306  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:16.560235  407534 retry.go:31] will retry after 519.129626ms: waiting for machine to come up
	I0108 22:52:17.081041  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:17.081498  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:17.081544  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:17.081438  407534 retry.go:31] will retry after 549.375377ms: waiting for machine to come up
	I0108 22:52:17.632182  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:17.632635  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:17.632669  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:17.632581  407534 retry.go:31] will retry after 879.065742ms: waiting for machine to come up
	I0108 22:52:18.513659  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:18.514091  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:18.514124  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:18.514029  407534 retry.go:31] will retry after 1.024749708s: waiting for machine to come up
	I0108 22:52:19.540306  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:19.540799  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:19.540827  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:19.540726  407534 retry.go:31] will retry after 1.043170144s: waiting for machine to come up
	I0108 22:52:20.586073  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:20.586468  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:20.586501  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:20.586424  407534 retry.go:31] will retry after 1.66659817s: waiting for machine to come up
	I0108 22:52:22.255467  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:22.255943  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:22.255975  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:22.255894  407534 retry.go:31] will retry after 2.251236752s: waiting for machine to come up
	I0108 22:52:24.508972  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:24.509574  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:24.509674  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:24.509550  407534 retry.go:31] will retry after 2.167195426s: waiting for machine to come up
	I0108 22:52:26.680245  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:26.680801  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:26.680826  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:26.680769  407534 retry.go:31] will retry after 2.992105106s: waiting for machine to come up
	I0108 22:52:29.674597  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:29.675001  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:29.675033  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:29.674943  407534 retry.go:31] will retry after 2.737710522s: waiting for machine to come up
	I0108 22:52:32.416139  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:32.416574  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:32.416602  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:32.416526  407534 retry.go:31] will retry after 3.984236982s: waiting for machine to come up
	I0108 22:52:36.405098  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.405690  407512 main.go:141] libmachine: (addons-910124) Found IP for machine: 192.168.39.129
	I0108 22:52:36.405722  407512 main.go:141] libmachine: (addons-910124) Reserving static IP address...
	I0108 22:52:36.405742  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has current primary IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.406123  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find host DHCP lease matching {name: "addons-910124", mac: "52:54:00:c1:ef:95", ip: "192.168.39.129"} in network mk-addons-910124
	I0108 22:52:36.497600  407512 main.go:141] libmachine: (addons-910124) DBG | Getting to WaitForSSH function...
	I0108 22:52:36.497634  407512 main.go:141] libmachine: (addons-910124) Reserved static IP address: 192.168.39.129
	I0108 22:52:36.497646  407512 main.go:141] libmachine: (addons-910124) Waiting for SSH to be available...
	I0108 22:52:36.500389  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.500794  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.500823  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.501020  407512 main.go:141] libmachine: (addons-910124) DBG | Using SSH client type: external
	I0108 22:52:36.501045  407512 main.go:141] libmachine: (addons-910124) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa (-rw-------)
	I0108 22:52:36.501138  407512 main.go:141] libmachine: (addons-910124) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:52:36.501168  407512 main.go:141] libmachine: (addons-910124) DBG | About to run SSH command:
	I0108 22:52:36.501207  407512 main.go:141] libmachine: (addons-910124) DBG | exit 0
	I0108 22:52:36.595647  407512 main.go:141] libmachine: (addons-910124) DBG | SSH cmd err, output: <nil>: 
	I0108 22:52:36.595896  407512 main.go:141] libmachine: (addons-910124) KVM machine creation complete!
	I0108 22:52:36.596279  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:36.596868  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:36.597059  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:36.597254  407512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:52:36.597270  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:52:36.598513  407512 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:52:36.598529  407512 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:52:36.598535  407512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:52:36.598542  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.600742  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.601039  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.601078  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.601202  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.601406  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.601556  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.601735  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.601900  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.602326  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.602345  407512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:52:36.723588  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:52:36.723625  407512 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:52:36.723642  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.727345  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.727786  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.727829  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.728015  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.728313  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.728511  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.728690  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.728881  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.729212  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.729237  407512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:52:36.852656  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:52:36.852847  407512 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:52:36.852866  407512 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:52:36.852881  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:36.853216  407512 buildroot.go:166] provisioning hostname "addons-910124"
	I0108 22:52:36.853252  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:36.853512  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.856350  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.856840  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.856871  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.857092  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.857307  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.857486  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.857644  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.857885  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.858283  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.858303  407512 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-910124 && echo "addons-910124" | sudo tee /etc/hostname
	I0108 22:52:36.993416  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-910124
	
	I0108 22:52:36.993447  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.996414  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.996799  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.996828  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.996997  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.997211  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.997401  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.997557  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.997729  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.998054  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.998071  407512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910124/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:52:37.129274  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:52:37.129312  407512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 22:52:37.129372  407512 buildroot.go:174] setting up certificates
	I0108 22:52:37.129426  407512 provision.go:83] configureAuth start
	I0108 22:52:37.129445  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:37.129770  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:37.132685  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.133007  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.133051  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.133253  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.135245  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.135553  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.135600  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.135670  407512 provision.go:138] copyHostCerts
	I0108 22:52:37.135752  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 22:52:37.135896  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 22:52:37.135973  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 22:52:37.136032  407512 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.addons-910124 san=[192.168.39.129 192.168.39.129 localhost 127.0.0.1 minikube addons-910124]
	I0108 22:52:37.250234  407512 provision.go:172] copyRemoteCerts
	I0108 22:52:37.250309  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:52:37.250364  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.253506  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.253921  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.253954  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.254129  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.254335  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.254483  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.254642  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:37.346000  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 22:52:37.371069  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:52:37.398153  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:52:37.423117  407512 provision.go:86] duration metric: configureAuth took 293.672688ms
	I0108 22:52:37.423160  407512 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:52:37.423426  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:52:37.423543  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.426787  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.427150  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.427207  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.427386  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.427660  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.427872  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.428023  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.428230  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:37.428609  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:37.428625  407512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:52:37.783403  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:52:37.783443  407512 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:52:37.783479  407512 main.go:141] libmachine: (addons-910124) Calling .GetURL
	I0108 22:52:37.784951  407512 main.go:141] libmachine: (addons-910124) DBG | Using libvirt version 6000000
	I0108 22:52:37.787481  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.787789  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.787825  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.788004  407512 main.go:141] libmachine: Docker is up and running!
	I0108 22:52:37.788024  407512 main.go:141] libmachine: Reticulating splines...
	I0108 22:52:37.788033  407512 client.go:171] LocalClient.Create took 24.305487314s
	I0108 22:52:37.788078  407512 start.go:167] duration metric: libmachine.API.Create for "addons-910124" took 24.30557735s
	I0108 22:52:37.788139  407512 start.go:300] post-start starting for "addons-910124" (driver="kvm2")
	I0108 22:52:37.788157  407512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:52:37.788182  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:37.788459  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:52:37.788486  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.790948  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.791563  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.791599  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.791849  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.792137  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.792330  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.792517  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:37.883513  407512 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:52:37.888256  407512 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:52:37.888284  407512 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 22:52:37.888352  407512 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 22:52:37.888373  407512 start.go:303] post-start completed in 100.224684ms
	I0108 22:52:37.888410  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:37.889073  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:37.893517  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.894060  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.894096  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.894391  407512 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json ...
	I0108 22:52:37.894644  407512 start.go:128] duration metric: createHost completed in 24.433746515s
	I0108 22:52:37.894710  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.897454  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.897831  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.897894  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.898034  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.898285  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.898487  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.898634  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.898812  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:37.899137  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:37.899149  407512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:52:38.025173  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704754358.001128464
	
	I0108 22:52:38.025221  407512 fix.go:206] guest clock: 1704754358.001128464
	I0108 22:52:38.025230  407512 fix.go:219] Guest: 2024-01-08 22:52:38.001128464 +0000 UTC Remote: 2024-01-08 22:52:37.894686839 +0000 UTC m=+24.567921542 (delta=106.441625ms)
	I0108 22:52:38.025254  407512 fix.go:190] guest clock delta is within tolerance: 106.441625ms
	I0108 22:52:38.025259  407512 start.go:83] releasing machines lock for "addons-910124", held for 24.564492803s
	I0108 22:52:38.025282  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.025599  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:38.028385  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.028767  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.028790  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.029018  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029649  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029829  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029931  407512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:52:38.029988  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:38.030120  407512 ssh_runner.go:195] Run: cat /version.json
	I0108 22:52:38.030153  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:38.032932  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033177  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033245  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.033294  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033433  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:38.033637  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.033645  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:38.033667  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033853  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:38.033873  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:38.034017  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:38.034111  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:38.034199  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:38.034333  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:38.146774  407512 ssh_runner.go:195] Run: systemctl --version
	I0108 22:52:38.152950  407512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:52:38.319094  407512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:52:38.326823  407512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:52:38.326942  407512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:52:38.344744  407512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:52:38.344780  407512 start.go:475] detecting cgroup driver to use...
	I0108 22:52:38.344944  407512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:52:38.361973  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:52:38.375713  407512 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:52:38.375796  407512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:52:38.389464  407512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:52:38.404281  407512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:52:38.518754  407512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:52:38.651253  407512 docker.go:219] disabling docker service ...
	I0108 22:52:38.651349  407512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:52:38.668109  407512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:52:38.682602  407512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:52:38.802440  407512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:52:38.921494  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:52:38.936934  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:52:38.957465  407512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:52:38.957536  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.969247  407512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:52:38.969324  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.981002  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.994186  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:39.005490  407512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:52:39.019297  407512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:52:39.030284  407512 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:52:39.030361  407512 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:52:39.045004  407512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:52:39.057207  407512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:52:39.172047  407512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:52:39.372313  407512 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:52:39.372432  407512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:52:39.378246  407512 start.go:543] Will wait 60s for crictl version
	I0108 22:52:39.378392  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:52:39.383199  407512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:52:39.433246  407512 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:52:39.433363  407512 ssh_runner.go:195] Run: crio --version
	I0108 22:52:39.487014  407512 ssh_runner.go:195] Run: crio --version
	I0108 22:52:39.536292  407512 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:52:39.538122  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:39.541261  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:39.541786  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:39.541816  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:39.542236  407512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:52:39.547286  407512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:52:39.562583  407512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:39.562681  407512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:39.602440  407512 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:52:39.602529  407512 ssh_runner.go:195] Run: which lz4
	I0108 22:52:39.606570  407512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:52:39.611132  407512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:52:39.611181  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:52:41.678449  407512 crio.go:444] Took 2.071916 seconds to copy over tarball
	I0108 22:52:41.678588  407512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:52:45.073822  407512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.395188593s)
	I0108 22:52:45.073871  407512 crio.go:451] Took 3.395374 seconds to extract the tarball
	I0108 22:52:45.073885  407512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:52:45.117171  407512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:45.192337  407512 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:52:45.192376  407512 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:52:45.192513  407512 ssh_runner.go:195] Run: crio config
	I0108 22:52:45.264304  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:52:45.264334  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:52:45.264368  407512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:52:45.264394  407512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910124 NodeName:addons-910124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:52:45.264564  407512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910124"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:52:45.264666  407512 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-910124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:52:45.264724  407512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:52:45.274425  407512 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:52:45.274521  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:52:45.284725  407512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0108 22:52:45.304137  407512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:52:45.323583  407512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 22:52:45.342901  407512 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0108 22:52:45.348297  407512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:52:45.362728  407512 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124 for IP: 192.168.39.129
	I0108 22:52:45.362802  407512 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.362957  407512 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 22:52:45.640384  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt ...
	I0108 22:52:45.640423  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt: {Name:mkc36a81852ddb14e4b61d277406a892b4ecb346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.640584  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key ...
	I0108 22:52:45.640595  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key: {Name:mk8a7ba93c9846e8f1712fa86d3e3c675b202eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.640666  407512 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 22:52:46.043234  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt ...
	I0108 22:52:46.043287  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt: {Name:mk54453d77771f2d907d21fe67e8d2434a1dc168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.043571  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key ...
	I0108 22:52:46.043592  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key: {Name:mk793ca51d4d203d77a080934d71e5dbc35c2281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.043772  407512 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key
	I0108 22:52:46.043791  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt with IP's: []
	I0108 22:52:46.303891  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt ...
	I0108 22:52:46.303927  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: {Name:mkcbcbec60054187cbf205990db887d434f8990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.304156  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key ...
	I0108 22:52:46.304174  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key: {Name:mkebc8b4d625ab039fd81d53e4de79d49a3c4cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.304269  407512 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0
	I0108 22:52:46.304294  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 with IP's: [192.168.39.129 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:52:46.540144  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 ...
	I0108 22:52:46.540192  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0: {Name:mk9ef56aba2c2d91ae74376a4f92b9791a8c93c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.540444  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0 ...
	I0108 22:52:46.540474  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0: {Name:mkdb35f8ce92d5cc71fea4e0f9d8c11ad40e3417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.540599  407512 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt
	I0108 22:52:46.540736  407512 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key
	I0108 22:52:46.540802  407512 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key
	I0108 22:52:46.540824  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt with IP's: []
	I0108 22:52:46.628398  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt ...
	I0108 22:52:46.628450  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt: {Name:mk73cdae1887d583f3ce444f0567f366b63ce828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.628744  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key ...
	I0108 22:52:46.628769  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key: {Name:mk4f6324c9a02eaa2b0d03c93035abfa9c6f9107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.629136  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:52:46.629191  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 22:52:46.629218  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:52:46.629251  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 22:52:46.630150  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:52:46.659248  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:52:46.688201  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:52:46.714910  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:52:46.742329  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:52:46.769959  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:52:46.798677  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:52:46.826210  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 22:52:46.851771  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:52:46.879161  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:52:46.899307  407512 ssh_runner.go:195] Run: openssl version
	I0108 22:52:46.905485  407512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:52:46.918066  407512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.924414  407512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.924492  407512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.931262  407512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:52:46.943314  407512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:52:46.949343  407512 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:52:46.949463  407512 kubeadm.go:404] StartCluster: {Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:46.949563  407512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:52:46.949629  407512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:52:46.993524  407512 cri.go:89] found id: ""
	I0108 22:52:46.993625  407512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:52:47.004364  407512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:52:47.015390  407512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:52:47.026894  407512 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:52:47.026983  407512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:52:47.254889  407512 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:53:00.929839  407512 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:53:00.929927  407512 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:53:00.930044  407512 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:53:00.930178  407512 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:53:00.930314  407512 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:53:00.930407  407512 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:53:00.932303  407512 out.go:204]   - Generating certificates and keys ...
	I0108 22:53:00.932407  407512 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:53:00.932510  407512 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:53:00.932608  407512 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:53:00.932685  407512 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:53:00.932776  407512 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:53:00.932857  407512 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:53:00.932943  407512 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:53:00.933073  407512 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-910124 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0108 22:53:00.933152  407512 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:53:00.933303  407512 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-910124 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0108 22:53:00.933390  407512 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:53:00.933475  407512 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:53:00.933538  407512 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:53:00.933613  407512 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:53:00.933692  407512 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:53:00.933768  407512 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:53:00.933858  407512 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:53:00.933930  407512 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:53:00.934042  407512 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:53:00.934136  407512 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:53:00.936109  407512 out.go:204]   - Booting up control plane ...
	I0108 22:53:00.936233  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:53:00.936354  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:53:00.936479  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:53:00.936615  407512 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:53:00.936722  407512 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:53:00.936777  407512 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:53:00.936976  407512 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:53:00.937107  407512 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002317 seconds
	I0108 22:53:00.937256  407512 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:53:00.937428  407512 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:53:00.937513  407512 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:53:00.937685  407512 kubeadm.go:322] [mark-control-plane] Marking the node addons-910124 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:53:00.937756  407512 kubeadm.go:322] [bootstrap-token] Using token: ldtf5y.qptqwomvby4plhf0
	I0108 22:53:00.939424  407512 out.go:204]   - Configuring RBAC rules ...
	I0108 22:53:00.939601  407512 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:53:00.939716  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:53:00.939907  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:53:00.940079  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:53:00.940231  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:53:00.940354  407512 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:53:00.940510  407512 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:53:00.940576  407512 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:53:00.940643  407512 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:53:00.940652  407512 kubeadm.go:322] 
	I0108 22:53:00.940733  407512 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:53:00.940749  407512 kubeadm.go:322] 
	I0108 22:53:00.940833  407512 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:53:00.940849  407512 kubeadm.go:322] 
	I0108 22:53:00.940900  407512 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:53:00.940966  407512 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:53:00.941034  407512 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:53:00.941049  407512 kubeadm.go:322] 
	I0108 22:53:00.941135  407512 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:53:00.941143  407512 kubeadm.go:322] 
	I0108 22:53:00.941217  407512 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:53:00.941228  407512 kubeadm.go:322] 
	I0108 22:53:00.941307  407512 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:53:00.941423  407512 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:53:00.941523  407512 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:53:00.941534  407512 kubeadm.go:322] 
	I0108 22:53:00.941640  407512 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:53:00.941762  407512 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:53:00.941782  407512 kubeadm.go:322] 
	I0108 22:53:00.941910  407512 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ldtf5y.qptqwomvby4plhf0 \
	I0108 22:53:00.942038  407512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0108 22:53:00.942076  407512 kubeadm.go:322] 	--control-plane 
	I0108 22:53:00.942084  407512 kubeadm.go:322] 
	I0108 22:53:00.942191  407512 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:53:00.942201  407512 kubeadm.go:322] 
	I0108 22:53:00.942311  407512 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ldtf5y.qptqwomvby4plhf0 \
	I0108 22:53:00.942468  407512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
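	(Aside, not part of the captured log: once admin.conf is in place as the init output above describes, the new control plane can be sanity-checked with ordinary kubectl calls. A minimal sketch, assuming the kubeconfig path shown by kubeadm:)

	    # assumes the admin kubeconfig written by kubeadm init on the node
	    export KUBECONFIG=/etc/kubernetes/admin.conf
	    # the single control-plane node (addons-910124) reports Ready once the CNI configured below is in place
	    kubectl get nodes -o wide
	    # core components appear in kube-system as static pods plus CoreDNS and kube-proxy
	    kubectl -n kube-system get pods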
	I0108 22:53:00.942497  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:53:00.942512  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:53:00.945585  407512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:53:00.947132  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:53:00.979204  407512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
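	(Aside: the 457-byte conflist copied above is not reproduced in the log. As an illustrative sketch only, a bridge CNI config of the kind written to /etc/cni/net.d/1-k8s.conflist looks roughly like the following; the subnet and option values are assumptions, using standard fields of the bridge and portmap CNI plugins, not contents read from this run:)

	    # NOTE: values below are illustrative assumptions, not the actual 457-byte file from the log
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF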
	I0108 22:53:01.063579  407512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:53:01.063682  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.063740  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-910124 minikube.k8s.io/updated_at=2024_01_08T22_53_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.178717  407512 ops.go:34] apiserver oom_adj: -16
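	(Aside: the minikube-rbac binding created above grants cluster-admin to the kube-system default service account. A sketch of the declarative equivalent of that imperative command, not a file taken from the repo:)

	    # hypothetical YAML equivalent of the "kubectl create clusterrolebinding minikube-rbac ..." command logged above
	    cat <<'EOF' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f -
	    apiVersion: rbac.authorization.k8s.io/v1
	    kind: ClusterRoleBinding
	    metadata:
	      name: minikube-rbac
	    roleRef:
	      apiGroup: rbac.authorization.k8s.io
	      kind: ClusterRole
	      name: cluster-admin
	    subjects:
	    - kind: ServiceAccount
	      name: default
	      namespace: kube-system
	    EOF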
	I0108 22:53:01.381342  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.881500  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:02.381990  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:02.881422  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:03.381796  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:03.881664  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:04.381429  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:04.881427  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:05.382340  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:05.881912  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:06.381552  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:06.881708  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:07.381568  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:07.881582  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:08.382296  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:08.881465  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:09.381460  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:09.881673  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:10.381804  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:10.882266  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:11.382349  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:11.881423  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:12.381597  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:12.881364  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:13.382270  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:13.881502  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:14.046599  407512 kubeadm.go:1088] duration metric: took 12.982995697s to wait for elevateKubeSystemPrivileges.
	I0108 22:53:14.046652  407512 kubeadm.go:406] StartCluster complete in 27.097199836s
	I0108 22:53:14.046680  407512 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:53:14.046835  407512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:53:14.047467  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:53:14.047768  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:53:14.047875  407512 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
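	(Aside: the toEnable map above is the same addon set that the minikube addons CLI controls per profile. For reference only, commands of this shape, not taken from this run, toggle individual entries from that map:)

	    # list addon status for this profile
	    minikube -p addons-910124 addons list
	    # enable/disable individual addons named in the toEnable map above
	    minikube -p addons-910124 addons enable registry
	    minikube -p addons-910124 addons enable csi-hostpath-driver
	    minikube -p addons-910124 addons disable ambassador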
	I0108 22:53:14.048007  407512 addons.go:69] Setting yakd=true in profile "addons-910124"
	I0108 22:53:14.048049  407512 addons.go:69] Setting metrics-server=true in profile "addons-910124"
	I0108 22:53:14.048071  407512 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-910124"
	I0108 22:53:14.048081  407512 addons.go:69] Setting registry=true in profile "addons-910124"
	I0108 22:53:14.048081  407512 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-910124"
	I0108 22:53:14.048097  407512 addons.go:69] Setting storage-provisioner=true in profile "addons-910124"
	I0108 22:53:14.048101  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:53:14.048115  407512 addons.go:69] Setting volumesnapshots=true in profile "addons-910124"
	I0108 22:53:14.048117  407512 addons.go:237] Setting addon storage-provisioner=true in "addons-910124"
	I0108 22:53:14.048126  407512 addons.go:237] Setting addon volumesnapshots=true in "addons-910124"
	I0108 22:53:14.048146  407512 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-910124"
	I0108 22:53:14.048104  407512 addons.go:237] Setting addon registry=true in "addons-910124"
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048071  407512 addons.go:237] Setting addon metrics-server=true in "addons-910124"
	I0108 22:53:14.048301  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048145  407512 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910124"
	I0108 22:53:14.048052  407512 addons.go:69] Setting cloud-spanner=true in profile "addons-910124"
	I0108 22:53:14.048465  407512 addons.go:237] Setting addon cloud-spanner=true in "addons-910124"
	I0108 22:53:14.048510  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048741  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048741  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048773  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048777  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048194  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048800  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048801  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048783  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048028  407512 addons.go:69] Setting gcp-auth=true in profile "addons-910124"
	I0108 22:53:14.048860  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048017  407512 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-910124"
	I0108 22:53:14.048879  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048890  407512 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-910124"
	I0108 22:53:14.048012  407512 addons.go:69] Setting ingress=true in profile "addons-910124"
	I0108 22:53:14.048903  407512 mustload.go:65] Loading cluster: addons-910124
	I0108 22:53:14.048905  407512 addons.go:237] Setting addon ingress=true in "addons-910124"
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048072  407512 addons.go:237] Setting addon yakd=true in "addons-910124"
	I0108 22:53:14.048912  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048034  407512 addons.go:69] Setting default-storageclass=true in profile "addons-910124"
	I0108 22:53:14.048974  407512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-910124"
	I0108 22:53:14.048036  407512 addons.go:69] Setting helm-tiller=true in profile "addons-910124"
	I0108 22:53:14.048986  407512 addons.go:237] Setting addon helm-tiller=true in "addons-910124"
	I0108 22:53:14.048034  407512 addons.go:69] Setting ingress-dns=true in profile "addons-910124"
	I0108 22:53:14.049013  407512 addons.go:237] Setting addon ingress-dns=true in "addons-910124"
	I0108 22:53:14.048074  407512 addons.go:69] Setting inspektor-gadget=true in profile "addons-910124"
	I0108 22:53:14.049085  407512 addons.go:237] Setting addon inspektor-gadget=true in "addons-910124"
	I0108 22:53:14.049095  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049116  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049139  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049243  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049269  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049432  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049518  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049557  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049667  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049696  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049703  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049778  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049779  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049812  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050125  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.050200  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.050239  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050476  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.050496  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050497  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:53:14.050887  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.072180  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0108 22:53:14.072408  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0108 22:53:14.072878  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.073424  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.073446  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.073824  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.074485  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.074534  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.074844  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41345
	I0108 22:53:14.075129  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.075414  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.075986  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.076018  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.076246  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.076265  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.076407  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.076657  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.077209  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.077255  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.078365  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0108 22:53:14.079027  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.079065  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.079304  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.080005  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.080073  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.080631  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.081348  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.081374  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083616  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.083657  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083805  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.083848  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083997  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.084036  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.092804  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0108 22:53:14.093521  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.094253  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.094282  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.094362  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0108 22:53:14.095009  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.095790  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.095855  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.099848  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.100694  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.100726  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.101274  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.101967  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.102016  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.103127  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0108 22:53:14.103840  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.104405  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.104432  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.104826  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.105403  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.105446  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.108622  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0108 22:53:14.109166  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.109680  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.109701  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.110072  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.110276  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.112717  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.114755  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:53:14.114360  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I0108 22:53:14.114506  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0108 22:53:14.116040  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:53:14.116656  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.117800  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:53:14.119180  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:53:14.118513  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.118827  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.119018  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0108 22:53:14.120386  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:53:14.120471  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.120958  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.121855  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:53:14.123241  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:53:14.124485  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:53:14.125649  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:53:14.125668  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:53:14.122438  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.125692  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.125714  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.124587  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.125767  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.122857  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.123927  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0108 22:53:14.124725  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0108 22:53:14.126878  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.126946  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.127543  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.127929  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.128004  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.128227  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.128245  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.129171  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.129229  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.129541  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.129614  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.129775  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.129794  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.130221  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.130255  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.130478  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.130517  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.130760  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.130837  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.130859  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.131532  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.131756  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.131910  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.132063  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.133920  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0108 22:53:14.134951  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.137144  407512 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:53:14.138272  407512 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:53:14.138293  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:53:14.137925  407512 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-910124"
	I0108 22:53:14.138317  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.138356  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.136546  407512 addons.go:237] Setting addon default-storageclass=true in "addons-910124"
	I0108 22:53:14.138403  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.138772  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.138807  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.138841  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.138902  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.137972  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0108 22:53:14.135815  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.140301  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.140977  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.140997  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.141505  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.141737  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.143216  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.143239  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.143311  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.143332  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.143352  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.143399  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.143701  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.143765  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.143897  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.144014  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.144258  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.144301  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.146764  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.148997  407512 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:53:14.147454  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0108 22:53:14.147799  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0108 22:53:14.150614  407512 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:53:14.150638  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:53:14.150669  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.151247  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.151340  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.152046  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.152080  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.152272  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.152291  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.152728  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.152782  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.153138  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.153202  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.154356  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0108 22:53:14.154983  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0108 22:53:14.155614  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.156164  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.156468  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.156582  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0108 22:53:14.156755  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.156998  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.157015  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.157171  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0108 22:53:14.158938  407512 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:53:14.157388  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.157442  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.157694  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.157930  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.157975  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.158210  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.160481  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.160767  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.161533  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.161555  407512 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:53:14.162552  407512 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:53:14.163915  407512 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:53:14.163937  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:53:14.163962  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.165510  407512 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:53:14.165536  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:53:14.165563  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.162625  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.162056  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.165702  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.162143  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.165760  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.161806  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.166842  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.166957  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0108 22:53:14.167044  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.167105  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.167215  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.167810  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.169928  407512 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:53:14.168237  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.167821  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.169434  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.169598  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.169778  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46179
	I0108 22:53:14.169822  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.170077  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.170318  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.171035  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.171651  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.171753  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.171785  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.171913  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:53:14.171933  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:53:14.171950  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.171950  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.171995  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.172090  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.172149  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.172292  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.172370  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.172388  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.172581  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.172672  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.173160  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.173181  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.173686  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.173792  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.174063  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.174096  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.174211  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.174253  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.174384  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.174395  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.174581  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.174806  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.175058  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.177386  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.178083  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.178148  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.178169  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.178190  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.178199  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0108 22:53:14.180521  407512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:53:14.178721  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.178723  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.179002  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.182156  407512 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:53:14.182179  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:53:14.182208  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.182699  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.182725  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.184901  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:53:14.182949  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.183196  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.186497  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:53:14.186510  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:53:14.186535  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.186944  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.187320  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.187545  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.187597  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.189030  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.189319  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0108 22:53:14.189561  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.189582  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.189763  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.189916  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.190021  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.190039  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.190213  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.190444  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.190469  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.190506  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.190519  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.190536  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.190703  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.190849  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.190966  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.191101  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.191297  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.193233  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.195191  407512 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 22:53:14.196668  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 22:53:14.196698  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 22:53:14.196729  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.199856  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0108 22:53:14.200263  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.200624  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.200868  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.200891  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.201445  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.201508  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.201525  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.201662  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.201731  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.201868  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.201921  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.202030  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.207780  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0108 22:53:14.208051  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I0108 22:53:14.208469  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.208636  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.209216  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.209237  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.209243  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.209264  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.209807  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.209859  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0108 22:53:14.210092  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.210114  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.210319  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.210407  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.210896  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.210912  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.211389  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.211641  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.212923  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.213008  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.215261  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:53:14.213819  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.218415  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:14.216992  407512 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:53:14.219479  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0108 22:53:14.221033  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I0108 22:53:14.221425  407512 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:53:14.222006  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.222556  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:14.224113  407512 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:53:14.224132  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:53:14.224147  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.224118  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:53:14.224186  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:53:14.224194  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.223075  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.223256  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.224245  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.222842  407512 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:53:14.224279  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:53:14.224286  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.224872  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.225484  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.225528  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.225770  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.225795  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.227076  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.227315  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.228065  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.228364  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.228393  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.228564  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.228700  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.228793  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.228924  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.229149  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.229661  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.229683  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.229851  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.230031  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.230095  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.230147  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.230170  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.230322  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.231807  407512 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:53:14.230967  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.233508  407512 out.go:177]   - Using image docker.io/busybox:stable
	W0108 22:53:14.231546  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38346->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.231001  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.233702  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.235288  407512 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:53:14.235304  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:53:14.235321  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.235367  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.235387  407512 retry.go:31] will retry after 324.086744ms: ssh: handshake failed: read tcp 192.168.39.1:38346->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.235419  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.235563  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	W0108 22:53:14.236669  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38352->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.236760  407512 retry.go:31] will retry after 156.651489ms: ssh: handshake failed: read tcp 192.168.39.1:38352->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.238734  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.247564  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0108 22:53:14.247973  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.247987  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.248039  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.248544  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.248597  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.248735  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.249019  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.249038  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.249041  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.249396  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.249541  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	W0108 22:53:14.250193  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 22:53:14.250221  407512 retry.go:31] will retry after 345.425047ms: ssh: handshake failed: EOF
	I0108 22:53:14.251275  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.251670  407512 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:53:14.251720  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:53:14.251753  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.255427  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.255948  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.255987  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.256258  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.256544  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.256728  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.256896  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	W0108 22:53:14.258304  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38372->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.258338  407512 retry.go:31] will retry after 319.615904ms: ssh: handshake failed: read tcp 192.168.39.1:38372->192.168.39.129:22: read: connection reset by peer
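The handshake failures above are transient: the VM's sshd is still settling, so sshutil retries each dial after a short backoff. If a connection like this ever needs to be checked by hand, the address, port, key path and username all appear in the sshutil lines; the sketch below just reuses them (the trailing "true" only verifies that a session can be opened):

    # Open and immediately close an SSH session to the minikube VM
    # (IP, port, key path and user copied from the sshutil log lines above)
    ssh -p 22 \
      -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa \
      docker@192.168.39.129 true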
	I0108 22:53:14.395533  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:53:14.395577  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:53:14.415900  407512 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:53:14.415940  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:53:14.456959  407512 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:53:14.457008  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:53:14.466527  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:53:14.466547  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:53:14.480477  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:53:14.486578  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
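The command above is how the host.minikube.internal record gets into the cluster: the live coredns ConfigMap is piped through sed and replaced in place. A minimal way to confirm the result by hand, assuming plain kubectl access to the same cluster (the grep pattern is only illustrative):

    # Show the hosts block that the sed pipeline inserts into the Corefile;
    # expected contents, per the command above:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'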
	I0108 22:53:14.492454  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:53:14.536820  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:53:14.536870  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:53:14.574109  407512 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:53:14.574137  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:53:14.600649  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:53:14.601795  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:53:14.601821  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:53:14.641710  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:53:14.658678  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 22:53:14.658712  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 22:53:14.659662  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:53:14.659685  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:53:14.659775  407512 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:53:14.659793  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:53:14.718709  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:53:14.718749  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:53:14.765937  407512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-910124" context rescaled to 1 replicas
	I0108 22:53:14.766006  407512 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:53:14.768025  407512 out.go:177] * Verifying Kubernetes components...
	I0108 22:53:14.769572  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:53:14.837124  407512 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:53:14.837212  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:53:14.862321  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:53:14.862354  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:53:14.878287  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:53:14.878323  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:53:14.960557  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:53:14.960587  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:53:15.081651  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:53:15.095894  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:53:15.095923  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 22:53:15.145866  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:53:15.145902  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:53:15.163804  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:53:15.163832  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:53:15.184945  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:53:15.212543  407512 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:53:15.212583  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:53:15.222343  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:53:15.222378  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:53:15.226785  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:53:15.247780  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:53:15.271053  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:53:15.271094  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:53:15.293859  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:53:15.365424  407512 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:53:15.365480  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:53:15.376247  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:53:15.385984  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:53:15.386028  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:53:15.440291  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:53:15.440320  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:53:15.443081  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:53:15.443098  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:53:15.496655  407512 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:15.496692  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:53:15.530325  407512 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:53:15.530353  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:53:15.573519  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:53:15.573551  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:53:15.582542  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:53:15.582572  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:53:15.610835  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:15.686977  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:53:15.687038  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:53:15.715637  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:53:15.720203  407512 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:53:15.720234  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:53:15.822526  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:53:15.831216  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:53:15.831249  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:53:15.914183  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:53:15.914216  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:53:16.018223  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:53:21.118667  407512 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.632025985s)
	I0108 22:53:21.118742  407512 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 22:53:21.118767  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.626266856s)
	I0108 22:53:21.118863  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.118881  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.118957  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.638428198s)
	I0108 22:53:21.119018  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119091  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119311  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:21.119383  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119395  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119408  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119418  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119509  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119523  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119536  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119545  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119681  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119696  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119818  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119843  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119881  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.305314  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.704605113s)
	I0108 22:53:22.305410  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.305433  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.305945  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.305970  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.305993  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.306004  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.306386  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.306466  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.306487  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.736952  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:53:22.736997  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:22.740578  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:22.741075  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:22.741116  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:22.741307  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:22.741579  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:22.741766  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:22.741944  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:22.947905  407512 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.178279802s)
	I0108 22:53:22.947957  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.866261751s)
	I0108 22:53:22.948020  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.306246694s)
	I0108 22:53:22.948050  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948074  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948154  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948181  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948537  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.948582  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948591  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.948602  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948611  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948661  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.948747  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948765  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.948797  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948807  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948885  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948900  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.949378  407512 node_ready.go:35] waiting up to 6m0s for node "addons-910124" to be "Ready" ...
	I0108 22:53:22.950796  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.950836  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.950847  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.950859  407512 addons.go:473] Verifying addon registry=true in "addons-910124"
	I0108 22:53:22.953031  407512 out.go:177] * Verifying registry addon...
	I0108 22:53:22.953861  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:53:22.955631  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
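From here on, kapi.go polls the pods matching that label until they leave Pending. A rough manual equivalent of the same wait, using the label and namespace from the log line above (the 6m0s timeout mirrors the value used elsewhere in this run and is only illustrative):

    # Block until the registry pods report Ready, or fail after the timeout
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m0s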
	I0108 22:53:23.018782  407512 addons.go:237] Setting addon gcp-auth=true in "addons-910124"
	I0108 22:53:23.018877  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:23.019452  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:23.019511  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:23.023215  407512 node_ready.go:49] node "addons-910124" has status "Ready":"True"
	I0108 22:53:23.023248  407512 node_ready.go:38] duration metric: took 73.848106ms waiting for node "addons-910124" to be "Ready" ...
	I0108 22:53:23.023262  407512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:53:23.036278  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0108 22:53:23.036907  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:23.037654  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:23.037685  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:23.038171  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:23.038855  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:23.038912  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:23.056988  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0108 22:53:23.057583  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:23.058310  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:23.058347  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:23.058849  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:23.059170  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:23.061808  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:23.062238  407512 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:53:23.062275  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:23.066529  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:23.067142  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:23.067196  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:23.067492  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:23.067790  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:23.068066  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:23.068265  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:23.143994  407512 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:53:23.144033  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.205417  407512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:23.265392  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.080397188s)
	I0108 22:53:23.265485  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.265512  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.265886  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.265948  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.265972  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.265984  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.265994  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.266346  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.266421  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.266442  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.485290  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.485329  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.485777  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.485792  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.485811  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.581118  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.983064  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.469021  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.990529  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.014553  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.787715366s)
	I0108 22:53:25.014611  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014626  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014623  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.766798292s)
	I0108 22:53:25.014664  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.720766667s)
	I0108 22:53:25.014705  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014731  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014746  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.6384685s)
	I0108 22:53:25.014758  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014762  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014777  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014796  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014993  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.299319509s)
	I0108 22:53:25.015024  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015049  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015054  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015066  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015069  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015080  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015089  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015096  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.192531806s)
	I0108 22:53:25.015121  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015126  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015133  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015164  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015174  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015182  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015191  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015189  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015200  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015242  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015251  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015260  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015268  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015253  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.404030456s)
	W0108 22:53:25.015308  407512 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:53:25.015386  407512 retry.go:31] will retry after 348.822362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
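This failure is the usual ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kinds yet, hence "ensure CRDs are installed first". The addon manager simply retries (and, at 22:53:25 below, re-runs the batch with kubectl apply --force). A sketch of an ordering that avoids the race, reusing the manifest paths from the log; the 60s timeout is an assumption:

    # Apply the snapshot CRDs first and wait for them to be Established,
    # then apply the VolumeSnapshotClass that depends on them
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml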
	I0108 22:53:25.015388  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015423  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015422  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015429  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015436  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015449  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015461  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015461  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015472  407512 addons.go:473] Verifying addon ingress=true in "addons-910124"
	I0108 22:53:25.015509  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.019466  407512 out.go:177] * Verifying ingress addon...
	I0108 22:53:25.015474  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015551  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015451  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015793  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015828  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015863  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015890  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021135  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021160  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021162  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021188  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.021191  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.021165  407512 addons.go:473] Verifying addon metrics-server=true in "addons-910124"
	I0108 22:53:25.021199  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.021463  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021480  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021579  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.021611  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021612  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.021624  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021628  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021636  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.023182  407512 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910124 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:53:25.022182  407512 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:53:25.048707  407512 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:53:25.048738  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.084061  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.084110  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.084539  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.084566  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.084582  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.239986  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:25.365439  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:25.551112  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.587053  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.830236  407512 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767955569s)
	I0108 22:53:25.832481  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:25.830626  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.812348107s)
	I0108 22:53:25.834198  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.835867  407512 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:53:25.834219  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.837788  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:53:25.836412  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.836468  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.837861  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 22:53:25.837869  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.838027  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.838041  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.838414  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.838439  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.838445  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.838460  407512 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-910124"
	I0108 22:53:25.840248  407512 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:53:25.842836  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:53:25.890887  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:53:25.890911  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:53:25.962178  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:25.962218  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:53:26.039387  407512 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:53:26.039424  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.080390  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:26.157822  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.248348  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.427394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.485825  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.560614  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.874522  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.962296  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.046287  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.349402  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:27.398898  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.547065  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.560038  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.853299  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.989275  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.173564  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.354219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.471049  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.543646  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.567296  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.201785045s)
	I0108 22:53:28.567393  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:28.567416  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:28.567733  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:28.567808  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:28.567833  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:28.567850  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:28.567862  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:28.568309  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:28.568351  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:28.568371  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:28.879544  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.986465  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.071836  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.991388015s)
	I0108 22:53:29.071929  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:29.071944  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:29.072382  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:29.072468  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:29.072485  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:29.072504  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:29.072533  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:29.073004  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:29.073029  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:29.074433  407512 addons.go:473] Verifying addon gcp-auth=true in "addons-910124"
	I0108 22:53:29.076694  407512 out.go:177] * Verifying gcp-auth addon...
	I0108 22:53:29.079630  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:53:29.118876  407512 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:53:29.118910  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.119134  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.352776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:29.463248  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.532210  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.589766  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.733537  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:29.858429  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.007618  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.045121  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.085467  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.356223  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.461648  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.530761  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.587213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.851807  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.963282  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.030537  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.085262  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.349415  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.462657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.537383  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.584324  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.857420  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.961728  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.035139  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.085002  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.215480  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:32.350450  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.467949  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.548207  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.590574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.861646  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.963648  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.032209  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.082964  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.350229  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.462614  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.530713  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.587491  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.852254  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.964689  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.035982  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.086305  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.217022  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:34.352717  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.462427  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.531705  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.585842  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.866227  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.967156  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.038741  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.090684  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.349174  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.461632  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.537152  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.587776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.856138  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.961986  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.030347  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.084316  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.495954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.502389  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.504260  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:36.534265  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.588934  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.849346  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.964067  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.030457  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.088305  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.354289  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.491293  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.543721  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.605884  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.857287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.962267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.029470  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.084650  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.349989  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.462114  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.530503  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.589108  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.714538  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:38.850234  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.966107  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.037336  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.086277  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.409826  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.465135  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.548550  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.586595  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.863458  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.965144  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.029580  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:40.084201  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.366967  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.462309  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.531406  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:40.583493  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.717782  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:40.857432  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.963076  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.034754  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.084050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.356337  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.472073  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.532975  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.584207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.849515  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.965355  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.030101  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.099155  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.360499  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.460510  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.529527  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.593292  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.821522  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:42.850988  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.960846  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.060509  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.085113  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.351843  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.462538  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.539900  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.593232  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.849721  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.963014  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.043878  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.086258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.357724  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:44.460879  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.530932  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.641297  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.853061  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.330678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.331650  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.349738  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.353810  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:45.362981  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.468199  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.530429  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.583718  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.850036  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.961776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.030544  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.085015  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.350003  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.462611  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.532264  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.584019  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.850350  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.968845  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.033895  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.091601  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.361574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.462375  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.549971  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.584729  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.721879  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:47.849787  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.962376  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.030258  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.083896  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.402869  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.461775  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.536166  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.585219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.857678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.963237  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.030464  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.086899  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.349538  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.462620  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.530439  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.584393  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.849685  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.961525  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.071890  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.084628  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.225246  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:50.353119  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.461894  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.529991  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.585213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.865294  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.962276  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.030070  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.087400  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.349314  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.462954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.530631  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.583717  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.853394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.961727  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.031137  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.085394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.350331  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.463259  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.531065  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.584153  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.715564  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:52.851339  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.961929  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.032101  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.085734  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.350290  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.461129  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.531108  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.584908  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.850148  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.963202  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.032208  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.090024  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.356360  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.461673  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.531735  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.585208  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.849635  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.962144  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.030374  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.084523  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.214321  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:55.350258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.461778  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.533264  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.584683  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.852075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.961387  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.030128  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.085432  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.245063  407512 pod_ready.go:92] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.245116  407512 pod_ready.go:81] duration metric: took 33.039649283s waiting for pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.245138  407512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.264053  407512 pod_ready.go:92] pod "etcd-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.264099  407512 pod_ready.go:81] duration metric: took 18.952105ms waiting for pod "etcd-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.264115  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.275669  407512 pod_ready.go:92] pod "kube-apiserver-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.275712  407512 pod_ready.go:81] duration metric: took 11.586815ms waiting for pod "kube-apiserver-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.275732  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.292749  407512 pod_ready.go:92] pod "kube-controller-manager-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.292785  407512 pod_ready.go:81] duration metric: took 17.043212ms waiting for pod "kube-controller-manager-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.292804  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qzsv5" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.302383  407512 pod_ready.go:92] pod "kube-proxy-qzsv5" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.302414  407512 pod_ready.go:81] duration metric: took 9.601523ms waiting for pod "kube-proxy-qzsv5" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.302426  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.352819  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.463685  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.531641  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.584176  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.611203  407512 pod_ready.go:92] pod "kube-scheduler-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.611236  407512 pod_ready.go:81] duration metric: took 308.803119ms waiting for pod "kube-scheduler-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.611248  407512 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.849625  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.964740  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.029271  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.087336  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.355597  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.461277  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.530902  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.584569  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.851952  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.962163  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:58.029338  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.084855  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.353747  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.468350  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:58.530348  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.586166  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.621220  407512 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:58.850519  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.961060  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:59.030056  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.085256  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.348655  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.461509  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:59.530093  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.586002  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.849731  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.963984  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:00.032316  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.084798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.351095  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.462588  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:00.530347  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.584893  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.849285  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.975288  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:01.032452  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.083923  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.126803  407512 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:01.350329  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.462604  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:01.532692  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.585064  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.858550  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.960949  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:02.030083  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.084780  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.362315  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.471502  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:02.556350  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.585467  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.636236  407512 pod_ready.go:92] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:54:02.636292  407512 pod_ready.go:81] duration metric: took 6.025034591s waiting for pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:02.636311  407512 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:02.849707  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.961865  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:03.030145  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.085954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.352801  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.470121  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:03.530277  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.587712  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.850508  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.964003  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:04.030842  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:04.085693  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.509608  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.522875  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:04.806866  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.807332  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:04.826998  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:04.855790  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.962417  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:05.029196  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.084410  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.352227  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.462511  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:05.533515  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.588113  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.849670  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.962918  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:06.032744  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.085678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.350050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.466258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:06.529942  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.585383  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.851287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.965282  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:07.030598  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.084225  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.154719  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:07.350965  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.463025  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:07.530226  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.584574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.851212  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.962682  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:08.032125  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.085267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.349554  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.462700  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:08.530429  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.584353  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.851397  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.965109  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:09.030000  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.086543  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.349646  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.462585  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:09.532690  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.584756  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.645208  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:09.853340  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.964643  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:10.030931  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:10.085246  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.696816  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:10.714751  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.715002  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:10.716244  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.850535  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.962827  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:11.031333  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:11.083905  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.350213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.464073  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:11.529791  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:11.584657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.849251  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.962276  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:12.030861  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:12.083923  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:12.145857  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:12.351092  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:12.463219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:12.531880  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:12.584618  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.206267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.216850  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:13.228573  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.234083  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:13.350085  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.466267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:13.531425  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:13.584169  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.849619  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.961702  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:14.030388  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:14.084760  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:14.350525  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.462423  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:14.530385  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:14.585959  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:14.650294  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:14.849688  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.961936  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.029179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:15.085199  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:15.144121  407512 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"True"
	I0108 22:54:15.144149  407512 pod_ready.go:81] duration metric: took 12.507829966s waiting for pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:15.144171  407512 pod_ready.go:38] duration metric: took 52.120894643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:54:15.144192  407512 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:54:15.144232  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:15.144297  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:15.231327  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:15.231369  407512 cri.go:89] found id: ""
	I0108 22:54:15.231380  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:15.231458  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.256567  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:15.256677  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:15.319314  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:15.319346  407512 cri.go:89] found id: ""
	I0108 22:54:15.319368  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:15.319428  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.331046  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:15.331141  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:15.351207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.431035  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:15.431075  407512 cri.go:89] found id: ""
	I0108 22:54:15.431085  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:15.431158  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.442387  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:15.442481  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:15.462160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.531167  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:15.543237  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:15.543265  407512 cri.go:89] found id: ""
	I0108 22:54:15.543276  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:15.543338  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.551491  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:15.551600  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:15.585776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:15.650081  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:15.650114  407512 cri.go:89] found id: ""
	I0108 22:54:15.650128  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:15.650214  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.666439  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:15.666545  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:15.778534  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:15.778562  407512 cri.go:89] found id: ""
	I0108 22:54:15.778571  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:15.778637  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.787703  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:15.787825  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:15.853478  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.900606  407512 cri.go:89] found id: ""
	I0108 22:54:15.900639  407512 logs.go:284] 0 containers: []
	W0108 22:54:15.900648  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:15.900662  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:15.900682  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:15.962609  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.979124  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:15.979164  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:16.033658  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:16.084479  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:16.103844  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:16.103890  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:16.206314  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:16.206366  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:16.350708  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.462437  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:16.530334  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:16.585798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:16.633594  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:16.633645  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:16.735780  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:16.735813  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 22:54:16.846919  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:16.847105  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:16.852687  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.865973  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:16.866031  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:16.905038  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:16.905091  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:16.966082  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:17.031596  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:17.084532  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:17.166846  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:17.166899  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:17.230940  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:17.231008  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:17.347511  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:17.347558  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:17.349507  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:17.394343  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:17.394410  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:17.394549  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:17.394569  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:17.394580  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:17.394595  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:17.394608  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:17.461326  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:17.531793  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:17.587809  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:17.853819  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:17.962004  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:18.031900  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:18.088046  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:18.350968  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:18.463846  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:18.529543  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:18.585159  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:18.850853  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:18.963561  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:19.031096  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:19.084807  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:19.358524  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:19.464549  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:19.534277  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:19.587135  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:19.850878  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:19.965920  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:20.049225  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:20.090461  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:20.365097  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:20.489954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:20.529905  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:20.585561  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:20.851148  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:20.961580  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:21.030377  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:21.085050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:21.351010  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:21.462119  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:21.530899  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:21.584992  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:21.851183  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:21.964889  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:22.032468  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:22.084383  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:22.349249  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:22.461804  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:22.531272  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:22.585296  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:22.852966  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:22.961583  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:23.030592  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:23.084835  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:23.349533  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:23.461190  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:23.538313  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:23.584299  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:23.851184  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:23.960827  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:24.030250  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:24.089200  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:24.351586  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:24.462858  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:24.531031  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:24.586411  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:24.850554  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:24.962798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:25.037696  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:25.088196  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:25.362995  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:25.462738  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:25.531973  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:25.585413  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:25.850814  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:25.966180  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:26.029962  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:26.085542  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:26.350557  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:26.462260  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:26.530914  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:26.586631  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:26.850090  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:26.962933  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:27.030114  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:27.083448  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:27.396115  407512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:54:27.453376  407512 api_server.go:72] duration metric: took 1m12.687320115s to wait for apiserver process to appear ...
	I0108 22:54:27.453417  407512 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:54:27.453470  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:27.453548  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:27.573794  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:27.573829  407512 cri.go:89] found id: ""
	I0108 22:54:27.573852  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:27.573927  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.597369  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:27.597466  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:27.698918  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:27.698957  407512 cri.go:89] found id: ""
	I0108 22:54:27.698969  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:27.699072  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.726352  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:27.726454  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:27.751673  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:27.753439  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:27.758031  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:27.758454  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:27.825675  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:27.825700  407512 cri.go:89] found id: ""
	I0108 22:54:27.825711  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:27.825775  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.834803  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:27.834896  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:27.853731  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:27.927068  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:27.927112  407512 cri.go:89] found id: ""
	I0108 22:54:27.927126  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:27.927208  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.932105  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:27.932177  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:27.962872  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:28.031219  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:28.078753  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:28.078786  407512 cri.go:89] found id: ""
	I0108 22:54:28.078796  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:28.078853  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:28.084792  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:28.091297  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:28.091406  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:28.172616  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:28.172646  407512 cri.go:89] found id: ""
	I0108 22:54:28.172659  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:28.172733  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:28.186169  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:28.186232  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:28.289669  407512 cri.go:89] found id: ""
	I0108 22:54:28.289696  407512 logs.go:284] 0 containers: []
	W0108 22:54:28.289705  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:28.289717  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:28.289738  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:28.351822  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:28.391689  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:28.391739  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:54:28.462480  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0108 22:54:28.480025  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:28.480210  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:28.499203  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:28.499238  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:28.530750  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:28.547794  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:28.547837  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:28.584331  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:28.849473  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:28.849533  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:28.853795  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:28.950311  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:28.950371  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:28.962622  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:29.031992  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:29.059949  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:29.059986  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:29.085032  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:29.150376  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:29.150421  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:29.247990  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:29.248038  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:29.351968  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:29.462754  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:29.532279  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:29.584551  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:29.684754  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:29.684811  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:29.761957  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:29.762007  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:29.860465  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:29.906592  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:29.906659  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:29.906773  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:29.906793  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:29.906813  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:29.906828  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:29.906837  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:29.961602  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:30.030655  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:30.083825  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:30.351242  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:30.462022  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:30.545969  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:30.592620  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:30.852761  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:30.962402  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:31.035727  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:31.109360  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:31.350955  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:31.465000  407512 kapi.go:107] duration metric: took 1m8.509372886s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 22:54:31.530021  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:31.586089  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:31.862191  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:32.030317  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:32.095075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:32.349427  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:32.532791  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:32.585839  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:32.852207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:33.030368  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:33.085838  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:33.351051  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:33.531133  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:33.584881  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:33.853562  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:34.030356  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:34.085453  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:34.361940  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:34.532834  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:34.589475  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:34.849816  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:35.051994  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:35.084479  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:35.350381  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:35.530730  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:35.583961  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:35.853821  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:36.030587  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:36.085695  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:36.354518  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:36.532353  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:36.588316  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:36.851485  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:37.030387  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:37.085075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:37.349858  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:37.533092  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:37.584986  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:37.850366  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:38.030748  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:38.084292  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:38.352160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:38.531030  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:38.585101  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:38.849843  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.031571  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:39.084693  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:39.356883  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.530392  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:39.584836  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:39.851245  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.908268  407512 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0108 22:54:39.915018  407512 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0108 22:54:39.916444  407512 api_server.go:141] control plane version: v1.28.4
	I0108 22:54:39.916493  407512 api_server.go:131] duration metric: took 12.463065793s to wait for apiserver health ...
	I0108 22:54:39.916504  407512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:54:39.916530  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:39.916598  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:39.967466  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:39.967497  407512 cri.go:89] found id: ""
	I0108 22:54:39.967507  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:39.967572  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:39.986014  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:39.986106  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:40.031259  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:40.053801  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:40.053839  407512 cri.go:89] found id: ""
	I0108 22:54:40.053851  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:40.053915  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.059802  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:40.059893  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:40.085160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:40.111011  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:40.111055  407512 cri.go:89] found id: ""
	I0108 22:54:40.111071  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:40.111140  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.116791  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:40.116886  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:40.167692  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:40.167726  407512 cri.go:89] found id: ""
	I0108 22:54:40.167737  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:40.167806  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.173863  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:40.173963  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:40.220957  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:40.220992  407512 cri.go:89] found id: ""
	I0108 22:54:40.221006  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:40.221076  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.227505  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:40.227587  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:40.286579  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:40.286608  407512 cri.go:89] found id: ""
	I0108 22:54:40.286617  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:40.286687  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.296616  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:40.296707  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:40.352368  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:40.494811  407512 cri.go:89] found id: ""
	I0108 22:54:40.494849  407512 logs.go:284] 0 containers: []
	W0108 22:54:40.494861  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:40.494875  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:40.494896  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:40.533081  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:40.583921  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:40.650324  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:40.650372  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:40.806617  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:40.806662  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:40.862992  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:40.918840  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:40.918883  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:41.038263  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:41.060567  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:41.060614  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:41.090070  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:41.351166  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:41.533810  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:41.575200  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:41.575262  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:41.591682  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:41.770685  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:41.770729  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:54:41.854438  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0108 22:54:41.911783  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:41.912008  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:41.936509  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:41.936565  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:42.046925  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:42.099287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:42.218600  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:42.218644  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:42.335092  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:42.335147  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:42.360553  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:42.403165  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:42.403231  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:42.534984  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:42.544230  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:42.544273  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:42.544349  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:42.544369  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:42.544383  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:42.544399  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:42.544409  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:42.598094  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:42.858951  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:43.030492  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:43.089635  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:43.350851  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:43.535968  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:43.585800  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:43.855097  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:44.038024  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:44.086132  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:44.359926  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:44.531963  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:44.586107  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:44.850657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:45.031827  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:45.084636  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:45.349841  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:45.539034  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:45.589869  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:45.850847  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:46.030771  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:46.099243  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:46.350494  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:46.546703  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:46.625341  407512 kapi.go:107] duration metric: took 1m17.545710634s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 22:54:46.627584  407512 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-910124 cluster.
	I0108 22:54:46.629889  407512 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 22:54:46.631941  407512 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 22:54:46.860454  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:47.031260  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:47.349873  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:47.530763  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:47.850804  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:48.030866  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:48.350058  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:48.532835  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:48.850027  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:49.031306  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:49.352397  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:49.531446  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:49.849748  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:50.030728  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:50.351033  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:50.532223  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:50.850425  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:51.030718  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:51.349156  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:51.531587  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:51.858715  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:52.031104  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:52.351510  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:52.530860  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:52.557078  407512 system_pods.go:59] 18 kube-system pods found
	I0108 22:54:52.557129  407512 system_pods.go:61] "coredns-5dd5756b68-nlqgd" [f78d5853-fe43-42cb-b283-3cfabf7408f1] Running
	I0108 22:54:52.557137  407512 system_pods.go:61] "csi-hostpath-attacher-0" [a4346e4e-3ea8-445e-b2ad-5ba0bb33583c] Running
	I0108 22:54:52.557145  407512 system_pods.go:61] "csi-hostpath-resizer-0" [9ad15dcb-eb67-4e9c-b2a1-d8e0fdc73bec] Running
	I0108 22:54:52.557158  407512 system_pods.go:61] "csi-hostpathplugin-t58w7" [135b9d3b-3b61-4d16-beba-9b88351a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:54:52.557167  407512 system_pods.go:61] "etcd-addons-910124" [a5756142-3dfa-4e20-a8cc-175b2e02fcab] Running
	I0108 22:54:52.557176  407512 system_pods.go:61] "kube-apiserver-addons-910124" [aaf01ae6-4110-4e46-b635-1785e8606696] Running
	I0108 22:54:52.557183  407512 system_pods.go:61] "kube-controller-manager-addons-910124" [8b30f0f7-629e-4ffd-8cd1-978c3a82dede] Running
	I0108 22:54:52.557191  407512 system_pods.go:61] "kube-ingress-dns-minikube" [1421ba58-25cc-45eb-b175-3febdab83a8e] Running
	I0108 22:54:52.557199  407512 system_pods.go:61] "kube-proxy-qzsv5" [5b398884-3550-4727-bf6e-9d10cd7e63ba] Running
	I0108 22:54:52.557217  407512 system_pods.go:61] "kube-scheduler-addons-910124" [ceb95c3e-4ec5-47dd-b38b-ae6fd7b62f1d] Running
	I0108 22:54:52.557224  407512 system_pods.go:61] "metrics-server-7c66d45ddc-fspmw" [e7812f80-df3d-4fc2-8430-9c7246f638f0] Running
	I0108 22:54:52.557234  407512 system_pods.go:61] "nvidia-device-plugin-daemonset-n8pqg" [22231673-96e3-48d4-a97e-9d77a615c63c] Running
	I0108 22:54:52.557242  407512 system_pods.go:61] "registry-5phsw" [886a9630-22c3-4d03-b42f-b2c1186c7c19] Running
	I0108 22:54:52.557252  407512 system_pods.go:61] "registry-proxy-br7js" [770ce618-3a9f-47a5-9070-e7364b2a564a] Running
	I0108 22:54:52.557261  407512 system_pods.go:61] "snapshot-controller-58dbcc7b99-b9rcb" [fe64aff3-259f-4596-bff9-821a4d91caa9] Running
	I0108 22:54:52.557268  407512 system_pods.go:61] "snapshot-controller-58dbcc7b99-db2j5" [a6367514-fb8f-4ce6-995c-3be39edd4eed] Running
	I0108 22:54:52.557275  407512 system_pods.go:61] "storage-provisioner" [c68caaf9-4a8b-49b7-8d56-414aabff20a5] Running
	I0108 22:54:52.557285  407512 system_pods.go:61] "tiller-deploy-7b677967b9-w9l5g" [d00ef7bc-d0f2-4fce-9757-1a825ca34ef8] Running
	I0108 22:54:52.557300  407512 system_pods.go:74] duration metric: took 12.640784835s to wait for pod list to return data ...
	I0108 22:54:52.557316  407512 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:54:52.560140  407512 default_sa.go:45] found service account: "default"
	I0108 22:54:52.560166  407512 default_sa.go:55] duration metric: took 2.839267ms for default service account to be created ...
	I0108 22:54:52.560178  407512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:54:52.576822  407512 system_pods.go:86] 18 kube-system pods found
	I0108 22:54:52.576878  407512 system_pods.go:89] "coredns-5dd5756b68-nlqgd" [f78d5853-fe43-42cb-b283-3cfabf7408f1] Running
	I0108 22:54:52.576888  407512 system_pods.go:89] "csi-hostpath-attacher-0" [a4346e4e-3ea8-445e-b2ad-5ba0bb33583c] Running
	I0108 22:54:52.576896  407512 system_pods.go:89] "csi-hostpath-resizer-0" [9ad15dcb-eb67-4e9c-b2a1-d8e0fdc73bec] Running
	I0108 22:54:52.576907  407512 system_pods.go:89] "csi-hostpathplugin-t58w7" [135b9d3b-3b61-4d16-beba-9b88351a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:54:52.576916  407512 system_pods.go:89] "etcd-addons-910124" [a5756142-3dfa-4e20-a8cc-175b2e02fcab] Running
	I0108 22:54:52.576925  407512 system_pods.go:89] "kube-apiserver-addons-910124" [aaf01ae6-4110-4e46-b635-1785e8606696] Running
	I0108 22:54:52.576932  407512 system_pods.go:89] "kube-controller-manager-addons-910124" [8b30f0f7-629e-4ffd-8cd1-978c3a82dede] Running
	I0108 22:54:52.576940  407512 system_pods.go:89] "kube-ingress-dns-minikube" [1421ba58-25cc-45eb-b175-3febdab83a8e] Running
	I0108 22:54:52.576947  407512 system_pods.go:89] "kube-proxy-qzsv5" [5b398884-3550-4727-bf6e-9d10cd7e63ba] Running
	I0108 22:54:52.576953  407512 system_pods.go:89] "kube-scheduler-addons-910124" [ceb95c3e-4ec5-47dd-b38b-ae6fd7b62f1d] Running
	I0108 22:54:52.576960  407512 system_pods.go:89] "metrics-server-7c66d45ddc-fspmw" [e7812f80-df3d-4fc2-8430-9c7246f638f0] Running
	I0108 22:54:52.576967  407512 system_pods.go:89] "nvidia-device-plugin-daemonset-n8pqg" [22231673-96e3-48d4-a97e-9d77a615c63c] Running
	I0108 22:54:52.576975  407512 system_pods.go:89] "registry-5phsw" [886a9630-22c3-4d03-b42f-b2c1186c7c19] Running
	I0108 22:54:52.576986  407512 system_pods.go:89] "registry-proxy-br7js" [770ce618-3a9f-47a5-9070-e7364b2a564a] Running
	I0108 22:54:52.576994  407512 system_pods.go:89] "snapshot-controller-58dbcc7b99-b9rcb" [fe64aff3-259f-4596-bff9-821a4d91caa9] Running
	I0108 22:54:52.577003  407512 system_pods.go:89] "snapshot-controller-58dbcc7b99-db2j5" [a6367514-fb8f-4ce6-995c-3be39edd4eed] Running
	I0108 22:54:52.577016  407512 system_pods.go:89] "storage-provisioner" [c68caaf9-4a8b-49b7-8d56-414aabff20a5] Running
	I0108 22:54:52.577026  407512 system_pods.go:89] "tiller-deploy-7b677967b9-w9l5g" [d00ef7bc-d0f2-4fce-9757-1a825ca34ef8] Running
	I0108 22:54:52.577043  407512 system_pods.go:126] duration metric: took 16.855372ms to wait for k8s-apps to be running ...
	I0108 22:54:52.577058  407512 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:54:52.577135  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:54:52.611238  407512 system_svc.go:56] duration metric: took 34.170178ms WaitForService to wait for kubelet.
	I0108 22:54:52.611275  407512 kubeadm.go:581] duration metric: took 1m37.845228616s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:54:52.611303  407512 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:54:52.614675  407512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:54:52.614709  407512 node_conditions.go:123] node cpu capacity is 2
	I0108 22:54:52.614722  407512 node_conditions.go:105] duration metric: took 3.4125ms to run NodePressure ...
	I0108 22:54:52.614737  407512 start.go:228] waiting for startup goroutines ...
	I0108 22:54:52.850674  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:53.029892  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:53.349195  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:53.531640  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:53.849217  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:54.030335  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:54.355763  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:54.530249  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:54.858273  407512 kapi.go:107] duration metric: took 1m29.015439976s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 22:54:55.029156  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:55.530993  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:56.030758  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:56.530881  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:57.029281  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:57.531777  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:58.029856  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:58.530504  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:59.030719  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:59.531806  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:00.030204  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:00.531344  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:01.044209  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:01.530797  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:02.029480  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:02.531736  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:03.032748  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:03.533457  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:04.034180  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:04.531468  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:05.033996  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:05.530616  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:06.031073  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:06.531594  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:07.035861  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:07.532283  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:08.030125  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:08.531296  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:09.031229  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:09.531454  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:10.030870  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:10.530969  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:11.030914  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:11.529529  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:12.031141  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:12.530036  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:13.030404  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:13.531845  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:14.030636  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:14.535296  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:15.030542  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:15.531518  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:16.031226  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:16.530327  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:17.030177  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:17.530774  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:18.029547  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:18.530181  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:19.030288  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:19.531709  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:20.030508  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:20.531741  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:21.030578  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:21.530604  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:22.031179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:22.529319  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:23.030021  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:23.530488  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:24.032685  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:24.529917  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:25.029980  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:25.529858  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:26.030726  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:26.531325  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:27.030956  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:27.529875  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:28.030165  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:28.530213  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:29.029862  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:29.530241  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:30.030062  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:30.529966  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:31.033626  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:31.531064  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:32.030102  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:32.530007  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:33.030906  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:33.533644  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:34.039131  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:34.531549  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:35.037737  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:35.530214  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:36.030102  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:36.531754  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:37.033179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:37.529988  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:38.030378  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:38.530346  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:39.030514  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:39.529984  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:40.030420  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:40.530241  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:41.030883  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:41.532388  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:42.032023  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:42.530097  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:43.032262  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:43.530524  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:44.030854  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:44.531105  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:45.032862  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:45.534775  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:46.031402  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:46.534273  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:47.030758  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:47.530871  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:48.036485  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:48.532251  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:49.032267  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:49.530237  407512 kapi.go:107] duration metric: took 2m24.508042334s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 22:55:49.532237  407512 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0108 22:55:49.533766  407512 addons.go:508] enable addons completed in 2m35.485893061s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0108 22:55:49.533813  407512 start.go:233] waiting for cluster config update ...
	I0108 22:55:49.533848  407512 start.go:242] writing updated cluster config ...
	I0108 22:55:49.534192  407512 ssh_runner.go:195] Run: rm -f paused
	I0108 22:55:49.594206  407512 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:55:49.596027  407512 out.go:177] * Done! kubectl is now configured to use "addons-910124" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:52:27 UTC, ends at Mon 2024-01-08 22:56:07 UTC. --
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131227311Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131278007Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131325285Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@754854eab8c1c41bf733ba68c8bbae4cdc5806bd557d0c8c35f692d928489d75\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131369176Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131415456Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131465999Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131510730Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131558283Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131604268Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6d2a98b274382ca188ce121413dcafda936b250500089a622c3f2ce821ab9a69\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131659621Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131706407Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131752486Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131799877Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.131844787Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@311f90a3747fd333f687bc8ea3a1bdaa7f19aec377adedcefa818d241ee514f1\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.132818962Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.133437237Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\"" file="storage/storage_transport.go:185"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.135237718Z" level=debug msg="exporting opaque data as blob \"sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a\"" file="storage/storage_image.go:212"
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.137610951Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e3db313c6dbc065d4ac3
b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34
c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.
io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,RepoTags:[],RepoDigests:[registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21 registry.k8s.io/metrics-server/metrics-server@sha256:ee4304963fb035239bb5c5e8c10f2f38ee80efc16ecbdb9feb7213c17ae2e86e],Size_:70330870,Uid:&Int64Value{Val
ue:65534,},Username:,Spec:nil,},&Image{Id:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,RepoTags:[],RepoDigests:[docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310 docker.io/marcnuri/yakd@sha256:e65e169e9a45f0fa8c0bb25f979481f4ed561aab48df856cba042a75dd34b0a9],Size_:204075024,Uid:&Int64Value{Value:10001,},Username:,Spec:nil,},&Image{Id:d378d53ef198dac0270a2861e7752267d41db8b5bc6e33fb7376fd77122fa43c,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931],Size_:249356252,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f],Size_:188129131,Uid:nil,Us
ername:,Spec:nil,},&Image{Id:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7],Size_:57899101,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:909c3ff012b7f9fc4b802b73f250ad45e4ffa385299b71fdd6813f70a6711792,RepoTags:[],RepoDigests:[docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86 docker.io/library/registry@sha256:860f379a011eddfab604d9acfe3cf50b2d6e958026fb0f977132b0b083b1a3d7],Size_:25961051,Uid:nil,Username:,Spec:nil,},&Image{Id:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e27
25ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b],Size_:57303140,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:8cfc3f994a82b92969bf5521603a7f2815cc9a84857b3a888402e19a37423c4b,RepoTags:[],RepoDigests:[nvcr.io/nvidia/k8s-device-plugin@sha256:0153ba5eac2182064434f0101acce97ef512df59a32e1fbbdef12ca75c514e1e nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1],Size_:303559878,Uid:nil,Username:,Spec:nil,},&Image{Id:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c],Size_:56980232,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,RepoTags:[],RepoDigests:[registry.k8s.
io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80],Size_:55070573,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:754854eab8c1c41bf733ba68c8bbae4cdc5806bd557d0c8c35f692d928489d75,RepoTags:[],RepoDigests:[gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49 gcr.io/cloud-spanner-emulator/emulator@sha256:7e0a9c24dddd7ef923530c1f490ed6382a4e3c9f49e7be7a3cec849bf1bfc30f],Size_:125497816,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d2fd211e7dcaaecc12a1c76088a88d83bd00bf716be19cef173392b68c5a3653,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5 gcr.io/k8s-minikube/kube-registry-proxy@sha256:f107ecd58728a2df5f2bb7e087f65f5363d0019b1e1fd476e4ef16065f44abfb],Size_:1465666
49,Uid:nil,Username:,Spec:nil,},&Image{Id:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280],Size_:54632579,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,RepoTags:[],RepoDigests:[ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f],Size_:88649672,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,},&Image{Id:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,RepoTags:[],RepoDigests:[docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef docker.io/rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246],Siz
e_:35264960,Uid:nil,Username:,Spec:nil,},&Image{Id:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c],Size_:21521620,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6d2a98b274382ca188ce121413dcafda936b250500089a622c3f2ce821ab9a69,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf],Size_:49800034,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,},&Image{Id:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dff
b8485cccc546be4efbaa14c9b22ea11 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5],Size_:37200280,Uid:nil,Username:,Spec:nil,},&Image{Id:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0],Size_:19577497,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8],Size_:60675705,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:738351fd438f02c0fa796f623f5ec066f743
1608d8c20524e0a109871454298c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5],Size_:57410185,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:311f90a3747fd333f687bc8ea3a1bdaa7f19aec377adedcefa818d241ee514f1,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75 registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e],Size_:256568209,Uid:nil,Username:www-data,Spec:nil,},&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
],Size_:4497096,Uid:nil,Username:,Spec:nil,},&Image{Id:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,RepoTags:[gcr.io/k8s-minikube/busybox:latest],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b],Size_:1462480,Uid:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=bfa911fc-3e11-4a74-8c96-d15eae59a2b2 name=/runtime.v1.ImageService/ListImages
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.155445791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7f686ebd-b901-4582-b6a2-96c128a1a77e name=/runtime.v1.RuntimeService/Version
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.155514947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7f686ebd-b901-4582-b6a2-96c128a1a77e name=/runtime.v1.RuntimeService/Version
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.157314337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=32825d6d-25c9-4df3-ab60-e5238d05cb86 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.158750122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704754567158725694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:464437,},InodesUsed:&UInt64Value{Value:199,},},},}" file="go-grpc-middleware/chain.go:25" id=32825d6d-25c9-4df3-ab60-e5238d05cb86 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.159503627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b80ea1ce-88b3-4b86-a8e6-31dbaddb13c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.159596379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b80ea1ce-88b3-4b86-a8e6-31dbaddb13c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:56:07 addons-910124 crio[712]: time="2024-01-08 22:56:07.160340812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29245ee2f29e49a0d06f478f1a6992c740ee1594eea7b10cbc477d4d84d7c4ea,PodSandboxId:eb5e9c191f3134d425868d9bbbe0e196a9b4cccbf007f281d44c9fb6fd75825b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},},ImageRef:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,State:CONTAINER_EXITED,CreatedAt:1704754559292789104,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-5a47dfec-d168-4824-b7d6-ab2a0c18ba84,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d385c563-fdf0-44bc-8c99-f683b5974787,},Annotations:map[string]string{io.kubernetes.container.hash: 1ac99ff,io.kub
ernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892,PodSandboxId:9a7242aee59aedcf2a54ae69ed549ca94bfb2c90ced26aa306ceaf90b46eee25,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,State:CONTAINER_RUNNING,CreatedAt:1704754548811713335,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-57x7q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0e3eadbf-d882-4751-84f1-43d0f065558c,},Annotations:map[strin
g]string{io.kubernetes.container.hash: e453bde5,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d62b32b0225d8652708461090f3ae7af05e2f7805bd33166695e725e72159d1f,PodSandboxId:0f18fac1cb259bbac9afa62f91e893482407bc57548bad44d4855d4762cde567,Metadata:&ContainerMetadata{Name:gadget,Attempt:3,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspe
ktor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce,State:CONTAINER_EXITED,CreatedAt:1704754533540126343,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-dg6l5,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ab57f458-54bf-4b04-abcd-172bd203b03e,},Annotations:map[string]string{io.kubernetes.container.hash: cc859a91,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a8e6a6de2d12f281956f883dac55b4a18da2eb657a9594baacb74c33266c0f,PodSandboxId:38a419163d74d941917d7cfe1514841fefabea8f4b53309a88ff6d3ec86adca3,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map
[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704754502084520381,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6xc4v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6d768c07-8c3b-4cfe-8396-5ef3ce9254e3,},Annotations:map[string]string{io.kubernetes.container.hash: 81fff38f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63ce05bf444dc65a4457eb0a0093c6de21a1fc0e607cae8edcb22bfec0d3dcd,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661f
d7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1704754493662723902,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: b8e9e052,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4900f098ebffaef5817058bf97d18f4118959d7734e9a0ce38e9ffa968c23d9a,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-
storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1704754491565443701,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: bde4b8ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52fe595d19340ca045ea1821d7ef0f91ed6816ac5ebc3930c37f27784443391f,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:&ContainerMetadata{Name:liveness-probe,Attem
pt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1704754489210149975,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 69ca6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ad4963f73159aa1b00621f57e7df540b37124bd3fc9ad269d22d86a2cb6003c,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:
&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1704754487913367486,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: b49fefb5,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfbb2b586824656bf
4ba646082f62f652c2f690cf25fdca17d0736897f19dc34,PodSandboxId:961bce878f46dca22aaa2d6f89e98257b02521d47a7076b4d0e0ce76d5aadf9b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704754486250218772,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-4wmbs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e71427ad-d27d-46a3-9de7-ebb6b117a0af,},Annotations:map[string]string{io.kubernetes.container.hash: adcf61c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6825938a883ca0d69bb236a09daf8c5b31dffb48a5ac3771bcc984a73668c59a,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1704754484107635152,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 50eef502,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d86be379877e9a08ac081cefe10602e6e96aedfa7e81db287da93f3b16bd8e3,PodSandboxId:35bff469d8882b48d84a467e16ce7b1c8922be0bb74326355539c0c86d8e3e9d,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1704754481550267934,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-b9rcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe64aff3-259f-4596-bff9-821a4d91caa9,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aec6a18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e55da588631b91ab73cd3b8645cfbec5106d9fc75d4cc3f8ef1a8fc0c24569,PodSandboxId:a5dba5ba8a92264686c3990000f5df8681a1562b6a4ecef81508a64df3d06d5f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704754481669308875,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4t8bl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6cc6434-3350-4f77
-81c8-b323beb8d885,},Annotations:map[string]string{io.kubernetes.container.hash: 6ed7b832,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cb2ac6794210e92aa6ef12c52ca8730e4139b8b85f7e88e41a10c97e24465d,PodSandboxId:f3f9e64dec66c844d8d82cd88699a526288b4db582c46c60072f6a5c9cb288e8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1704754481435447955,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-9p6zt,io.kuber
netes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f0827a81-5f88-4007-b8e9-6d28060ac3ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb60fce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5557a617adeb561622ddecd9b321f41b77b33e532c6a5c5219ee7d5d68cdc54e,PodSandboxId:a0bd6e854698b730d71174a98b73b9b0853dbb678289c6253b4613feefd52686,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1704754476613391827,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-w9l5g,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: d00ef7bc-d0f2-4fce-9757-1a825ca34ef8,},Annotations:map[string]string{io.kubernetes.container.hash: b74b9419,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02839fcac5ca6c3856d544c4658365f3d400a9ac66034a71a95d624e3c615e62,PodSandboxId:3b587cabbd8c14c8ee9e20b400dcb3ff2614a7f6b1e11c1b27c175d85e0b4bb7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2
b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1704754473187355429,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-db2j5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6367514-fb8f-4ce6-995c-3be39edd4eed,},Annotations:map[string]string{io.kubernetes.container.hash: a520edb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c,PodSandboxId:4ade749b95a0466d1db9675292f9ce17b052f6c27a6618200bfebefd3d3ea9e9,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},Image
Ref:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1704754470402570106,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-br7js,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ce618-3a9f-47a5-9070-e7364b2a564a,},Annotations:map[string]string{io.kubernetes.container.hash: ce5a1bbd,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b06f34b50d399d1d4a619dbe2bd86d31b94fb8cc09d34f3384b0694d69a5caf1,PodSandboxId:ea6f39acb64041c5a5a7537dbda0b9a17b6d12af266cbdec6367ad1eba44f496,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:
0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1704754456708502940,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-t58w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 135b9d3b-3b61-4d16-beba-9b88351a4d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2168a381,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77779480dad8559502a544d1f41828f129351da41a867edb475046541bde1e52,PodSandboxId:5d
241a91282eef2f3e491ba694b738094df92dbb43a76400b67aa2a27a8b7d9f,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1704754445339053858,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4346e4e-3ea8-445e-b2ad-5ba0bb33583c,},Annotations:map[string]string{io.kubernetes.container.hash: 1ded2758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d335b3cfb0835b4edc1ee00b4ca8778961b740f6d9
0d988ac28e635ea65ece19,PodSandboxId:03b58b75139d9cdc2d98acaa4b1e6bbdbb2e967c9872b825e0a8f2f3c1578629,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,State:CONTAINER_RUNNING,CreatedAt:1704754443248285462,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-5phsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886a9630-22c3-4d03-b42f-b2c1186c7c19,},Annotations:map[string]string{io.kubernetes.container.hash: d038f329,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:f207408f227421da3cd414bc178d53a5b41db6f02223ef536728b2c2e47836e5,PodSandboxId:563100b3a3f05fa4b708efc4842973de6aa052791f30330f06cf80edebc103e9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_RUNNING,CreatedAt:1704754441266100426,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-7c66d45ddc-fspmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7812f80-df3d-4fc2-8430-9c7246f638f0,},Annotations:map[string]string{io.kubernetes.container.hash: a87e25e9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1919095aac1ff7b608936ab8a28482e7b6da3ecdd03c191becc7b1faacca2b7a,PodSandboxId:c792dd927867695552852971adb9ac206d53b629514e626b10a8f0c051cd473c,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1704754440061718387,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad15dcb-eb67-4e9c-b2a1-d8e0fdc73bec,},Annotations:map[string]string{io.kubernetes.container.hash: 78f5a7dd,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5,PodSandboxId:b2776d9d03d1148d1c86834b92f9a3aee8cecb34e828dbdadf90df68bdee9359,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1704754435699178218,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1421ba58-25cc-45eb-b175-3febdab83a8e,},Annotations:map[string]string{io.kubernetes.container.hash: 17261547,io.
kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465,PodSandboxId:a7736af30bf7630abd0019ac04ee65207b1273a0be72a1c029718261f65905a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704754417581289479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c68caaf9-4a8b-49b7-
8d56-414aabff20a5,},Annotations:map[string]string{io.kubernetes.container.hash: a6db657c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c5290454df47f080e78d601346f14c5fc9e28b1a34bd7ced142e2c13f451a0,PodSandboxId:173b22b3f6cbc657f9425e5c84b7f45b27c67ce10ccf9dd2de41b1f666a7fb27,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704754417499833705,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-d5pgh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8df1e3cb-5981-4ca0-81
78-2b3f4ef883db,},Annotations:map[string]string{io.kubernetes.container.hash: cfaf1fdf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae,PodSandboxId:488589f8ed1e4559b1299e36c63d7205877a657d9c7b431243025259ce339a3b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704754404961428563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzsv5,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 5b398884-3550-4727-bf6e-9d10cd7e63ba,},Annotations:map[string]string{io.kubernetes.container.hash: 55bb5e79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1558990c8dd707a0beaec89ba9e6656c8f0c85cc85fc81c0508d7795a77d34cf,PodSandboxId:563100b3a3f05fa4b708efc4842973de6aa052791f30330f06cf80edebc103e9,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21,State:CONTAINER_EXITED,CreatedAt:1704754410058437325,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.na
me: metrics-server-7c66d45ddc-fspmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7812f80-df3d-4fc2-8430-9c7246f638f0,},Annotations:map[string]string{io.kubernetes.container.hash: a87e25e9,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074,PodSandboxId:b39d6d9e87fade91c4d974f622545b04492d5a89a0d489a5629b96ff8bb1cf88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704754397343329787,Labels
:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nlqgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78d5853-fe43-42cb-b283-3cfabf7408f1,},Annotations:map[string]string{io.kubernetes.container.hash: 90a12478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e,PodSandboxId:f57afd6d60ec199e7063beb2b8051e39cc0fcb07c5e970f4ca56a5aa91abba70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,An
notations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704754372741453134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9513a9abfc4bec220ed857875c9d44,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a,PodSandboxId:c6fb9122e1d1304d69b0d61a57b3104e55425b0846394d289c4484ac2b974363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotati
ons:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704754372626515249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12296721335dc694685986b99e962f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842,PodSandboxId:dec7eaaf111e19534259afd1431dd50ca1c114c743d6da40120744d3fdf67bb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704754372598213243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c8e792aa9f76cb090c06a2a4f81415,},Annotations:map[string]string{io.kubernetes.container.hash: 5b8f5917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9,PodSandboxId:e2f8b537c3e136eb7ec6ac892e273577f2512da5c22d7d5842a9cfff2f7f14df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry
.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704754372267750926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3a287bd85c417eb3c4253cb1a5b935,},Annotations:map[string]string{io.kubernetes.container.hash: cc3be73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b80ea1ce-88b3-4b86-a8e6-31dbaddb13c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	29245ee2f29e4       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            8 seconds ago        Exited              helper-pod                               0                   eb5e9c191f313       helper-pod-create-pvc-5a47dfec-d168-4824-b7d6-ab2a0c18ba84
	c6f74bba0afca       registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75                             18 seconds ago       Running             controller                               0                   9a7242aee59ae       ingress-nginx-controller-69cff4fd79-57x7q
	d62b32b0225d8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce                            33 seconds ago       Exited              gadget                                   3                   0f18fac1cb259       gadget-dg6l5
	49a8e6a6de2d1       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             About a minute ago   Exited              patch                                    3                   38a419163d74d       ingress-nginx-admission-patch-6xc4v
	d63ce05bf444d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	4900f098ebffa       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	52fe595d19340       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	6ad4963f73159       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	cfbb2b5868246       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 About a minute ago   Running             gcp-auth                                 0                   961bce878f46d       gcp-auth-d4c87556c-4wmbs
	6825938a883ca       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	c7e55da588631       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              create                                   0                   a5dba5ba8a922       ingress-nginx-admission-create-4t8bl
	1d86be379877e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   35bff469d8882       snapshot-controller-58dbcc7b99-b9rcb
	96cb2ac679421       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   f3f9e64dec66c       local-path-provisioner-78b46b4d5c-9p6zt
	5557a617adeb5       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  About a minute ago   Running             tiller                                   0                   a0bd6e854698b       tiller-deploy-7b677967b9-w9l5g
	02839fcac5ca6       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   3b587cabbd8c1       snapshot-controller-58dbcc7b99-db2j5
	4d29b343574ca       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5                              About a minute ago   Running             registry-proxy                           0                   4ade749b95a04       registry-proxy-br7js
	b06f34b50d399       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   ea6f39acb6404       csi-hostpathplugin-t58w7
	77779480dad85       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago        Running             csi-attacher                             0                   5d241a91282ee       csi-hostpath-attacher-0
	d335b3cfb0835       docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86                                           2 minutes ago        Running             registry                                 0                   03b58b75139d9       registry-5phsw
	f207408f22742       a608c686bac931a5955f10a01b606f289af2b6fd9250e7c4eadc4a8117002c57                                                                             2 minutes ago        Running             metrics-server                           1                   563100b3a3f05       metrics-server-7c66d45ddc-fspmw
	1919095aac1ff       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago        Running             csi-resizer                              0                   c792dd9278676       csi-hostpath-resizer-0
	41b3661efb781       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   b2776d9d03d11       kube-ingress-dns-minikube
	f7f1cc8b30161       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   a7736af30bf76       storage-provisioner
	e5c5290454df4       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              2 minutes ago        Running             yakd                                     0                   173b22b3f6cbc       yakd-dashboard-9947fc6bf-d5pgh
	1558990c8dd70       registry.k8s.io/metrics-server/metrics-server@sha256:9f50dd170c1146f1da6a8bdf955c8aad35b4066097d847f94cd0377170d67d21                        2 minutes ago        Exited              metrics-server                           0                   563100b3a3f05       metrics-server-7c66d45ddc-fspmw
	22ca3f1305931       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             2 minutes ago        Running             kube-proxy                               0                   488589f8ed1e4       kube-proxy-qzsv5
	7c50c880fc226       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             2 minutes ago        Running             coredns                                  0                   b39d6d9e87fad       coredns-5dd5756b68-nlqgd
	4e23cb34099d4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             3 minutes ago        Running             kube-scheduler                           0                   f57afd6d60ec1       kube-scheduler-addons-910124
	22ebaab17be2d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             3 minutes ago        Running             kube-controller-manager                  0                   c6fb9122e1d13       kube-controller-manager-addons-910124
	bef86635ce9a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             3 minutes ago        Running             etcd                                     0                   dec7eaaf111e1       etcd-addons-910124
	c0f1ac0ede0f8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             3 minutes ago        Running             kube-apiserver                           0                   e2f8b537c3e13       kube-apiserver-addons-910124
	
	
	==> coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] <==
	[INFO] 10.244.0.8:36667 - 34584 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113742s
	[INFO] 10.244.0.8:57641 - 3133 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090329s
	[INFO] 10.244.0.8:57641 - 36403 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110642s
	[INFO] 10.244.0.8:45904 - 18040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059028s
	[INFO] 10.244.0.8:45904 - 48710 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070364s
	[INFO] 10.244.0.8:59578 - 34745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094784s
	[INFO] 10.244.0.8:59578 - 11702 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144724s
	[INFO] 10.244.0.8:47995 - 13856 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001411604s
	[INFO] 10.244.0.8:47995 - 12583 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.003668703s
	[INFO] 10.244.0.8:37296 - 25870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060587s
	[INFO] 10.244.0.8:37296 - 48906 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215654s
	[INFO] 10.244.0.8:34377 - 4944 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065893s
	[INFO] 10.244.0.8:34377 - 42847 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000494295s
	[INFO] 10.244.0.8:59558 - 6025 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000296096s
	[INFO] 10.244.0.8:59558 - 23439 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035732s
	[INFO] 10.244.0.20:47528 - 9674 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307155s
	[INFO] 10.244.0.20:33784 - 12481 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130813s
	[INFO] 10.244.0.20:56930 - 26736 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000398441s
	[INFO] 10.244.0.20:50939 - 53688 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000372579s
	[INFO] 10.244.0.20:34587 - 4671 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194431s
	[INFO] 10.244.0.20:51885 - 29875 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00021741s
	[INFO] 10.244.0.20:57307 - 36853 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000700935s
	[INFO] 10.244.0.20:56559 - 26685 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0009464s
	[INFO] 10.244.0.23:40299 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001810547s
	[INFO] 10.244.0.23:41785 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000369584s
	
	
	==> describe nodes <==
	Name:               addons-910124
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910124
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-910124
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_53_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910124
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-910124"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:52:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910124
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:56:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:56:05 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:56:05 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:56:05 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:56:05 +0000   Mon, 08 Jan 2024 22:53:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-910124
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf8c958003dc42c1a0fb654d3ae6456a
	  System UUID:                cf8c9580-03dc-42c1-a0fb-654d3ae6456a
	  Boot ID:                    effe1175-b51f-4c81-986a-8be7ea71e2c1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gadget                      gadget-dg6l5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gcp-auth                    gcp-auth-d4c87556c-4wmbs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  headlamp                    headlamp-7ddfbb94ff-sfj86                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-57x7q    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m43s
	  kube-system                 coredns-5dd5756b68-nlqgd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 csi-hostpathplugin-t58w7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 etcd-addons-910124                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m6s
	  kube-system                 kube-apiserver-addons-910124                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-controller-manager-addons-910124        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-qzsv5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-scheduler-addons-910124                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 metrics-server-7c66d45ddc-fspmw              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         2m45s
	  kube-system                 registry-5phsw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 registry-proxy-br7js                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 snapshot-controller-58dbcc7b99-b9rcb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 snapshot-controller-58dbcc7b99-db2j5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 tiller-deploy-7b677967b9-w9l5g               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  local-path-storage          local-path-provisioner-78b46b4d5c-9p6zt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-d5pgh               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m32s  kube-proxy       
	  Normal  Starting                 3m7s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m6s   kubelet          Node addons-910124 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s   kubelet          Node addons-910124 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s   kubelet          Node addons-910124 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s   kubelet          Node addons-910124 status is now: NodeReady
	  Normal  RegisteredNode           2m54s  node-controller  Node addons-910124 event: Registered Node addons-910124 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.101246] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.630820] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.963270] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.163221] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.142462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.404427] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.122483] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.164268] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.119928] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.248676] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +11.211568] systemd-fstab-generator[904]: Ignoring "noauto" for root device
	[ +10.302806] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[Jan 8 22:53] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.809775] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.281880] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.733317] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:54] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.068029] kauditd_printk_skb: 24 callbacks suppressed
	[Jan 8 22:55] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.907431] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:56] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] <==
	{"level":"warn","ts":"2024-01-08T22:54:13.201675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.898502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-08T22:54:13.201728Z","caller":"traceutil/trace.go:171","msg":"trace[203461593] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:954; }","duration":"122.953774ms","start":"2024-01-08T22:54:13.078768Z","end":"2024-01-08T22:54:13.201722Z","steps":["trace[203461593] 'agreement among raft nodes before linearized reading'  (duration: 122.87495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:54:13.201839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.428859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-08T22:54:13.201875Z","caller":"traceutil/trace.go:171","msg":"trace[823684303] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:954; }","duration":"178.466039ms","start":"2024-01-08T22:54:13.023404Z","end":"2024-01-08T22:54:13.20187Z","steps":["trace[823684303] 'agreement among raft nodes before linearized reading'  (duration: 178.406025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:54:13.202054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.556888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:54:13.213583Z","caller":"traceutil/trace.go:171","msg":"trace[302359390] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:954; }","duration":"200.075656ms","start":"2024-01-08T22:54:13.013488Z","end":"2024-01-08T22:54:13.213564Z","steps":["trace[302359390] 'agreement among raft nodes before linearized reading'  (duration: 188.546661ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:27.740029Z","caller":"traceutil/trace.go:171","msg":"trace[1790739539] transaction","detail":"{read_only:false; response_revision:996; number_of_response:1; }","duration":"419.556224ms","start":"2024-01-08T22:54:27.320409Z","end":"2024-01-08T22:54:27.739965Z","steps":["trace[1790739539] 'process raft request'  (duration: 419.390442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:54:27.740296Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:54:27.320392Z","time spent":"419.82438ms","remote":"127.0.0.1:35792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:994 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-08T22:54:27.740677Z","caller":"traceutil/trace.go:171","msg":"trace[1780452644] linearizableReadLoop","detail":"{readStateIndex:1028; appliedIndex:1028; }","duration":"398.289153ms","start":"2024-01-08T22:54:27.342378Z","end":"2024-01-08T22:54:27.740667Z","steps":["trace[1780452644] 'read index received'  (duration: 398.286624ms)","trace[1780452644] 'applied index is now lower than readState.Index'  (duration: 1.947µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:54:27.742022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"399.642546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82052"}
	{"level":"info","ts":"2024-01-08T22:54:27.743064Z","caller":"traceutil/trace.go:171","msg":"trace[771455792] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:996; }","duration":"400.699476ms","start":"2024-01-08T22:54:27.342353Z","end":"2024-01-08T22:54:27.743053Z","steps":["trace[771455792] 'agreement among raft nodes before linearized reading'  (duration: 399.429101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:54:27.743144Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:54:27.34234Z","time spent":"400.792497ms","remote":"127.0.0.1:35796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":82075,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-01-08T22:54:27.742364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.797645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82052"}
	{"level":"info","ts":"2024-01-08T22:54:27.746136Z","caller":"traceutil/trace.go:171","msg":"trace[412743207] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:996; }","duration":"291.567114ms","start":"2024-01-08T22:54:27.454553Z","end":"2024-01-08T22:54:27.74612Z","steps":["trace[412743207] 'agreement among raft nodes before linearized reading'  (duration: 287.705899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:54:27.742396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.526604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-01-08T22:54:27.742428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.281807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"warn","ts":"2024-01-08T22:54:27.742464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.9388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14030"}
	{"level":"info","ts":"2024-01-08T22:54:27.749504Z","caller":"traceutil/trace.go:171","msg":"trace[803257668] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:996; }","duration":"143.624285ms","start":"2024-01-08T22:54:27.605861Z","end":"2024-01-08T22:54:27.749485Z","steps":["trace[803257668] 'agreement among raft nodes before linearized reading'  (duration: 136.519239ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:27.749661Z","caller":"traceutil/trace.go:171","msg":"trace[135481271] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:996; }","duration":"170.506332ms","start":"2024-01-08T22:54:27.579143Z","end":"2024-01-08T22:54:27.749649Z","steps":["trace[135481271] 'agreement among raft nodes before linearized reading'  (duration: 163.26193ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:27.749727Z","caller":"traceutil/trace.go:171","msg":"trace[1488137472] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:996; }","duration":"225.19982ms","start":"2024-01-08T22:54:27.524521Z","end":"2024-01-08T22:54:27.749721Z","steps":["trace[1488137472] 'agreement among raft nodes before linearized reading'  (duration: 217.913133ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:37.957082Z","caller":"traceutil/trace.go:171","msg":"trace[1291385949] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"104.593351ms","start":"2024-01-08T22:54:37.852473Z","end":"2024-01-08T22:54:37.957067Z","steps":["trace[1291385949] 'process raft request'  (duration: 104.330985ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.947215Z","caller":"traceutil/trace.go:171","msg":"trace[1634347057] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"186.372781ms","start":"2024-01-08T22:55:03.760232Z","end":"2024-01-08T22:55:03.946605Z","steps":["trace[1634347057] 'process raft request'  (duration: 185.914295ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.951136Z","caller":"traceutil/trace.go:171","msg":"trace[274724205] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"143.044638ms","start":"2024-01-08T22:55:03.808073Z","end":"2024-01-08T22:55:03.951117Z","steps":["trace[274724205] 'process raft request'  (duration: 142.963483ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.951196Z","caller":"traceutil/trace.go:171","msg":"trace[389919165] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"177.431888ms","start":"2024-01-08T22:55:03.773743Z","end":"2024-01-08T22:55:03.951175Z","steps":["trace[389919165] 'process raft request'  (duration: 177.065685ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:45.856727Z","caller":"traceutil/trace.go:171","msg":"trace[476128016] transaction","detail":"{read_only:false; response_revision:1264; number_of_response:1; }","duration":"229.432044ms","start":"2024-01-08T22:55:45.62716Z","end":"2024-01-08T22:55:45.856592Z","steps":["trace[476128016] 'process raft request'  (duration: 229.052484ms)"],"step_count":1}
	
	
	==> gcp-auth [cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34] <==
	2024/01/08 22:54:46 GCP Auth Webhook started!
	2024/01/08 22:55:55 Ready to marshal response ...
	2024/01/08 22:55:55 Ready to write response ...
	2024/01/08 22:55:56 Ready to marshal response ...
	2024/01/08 22:55:56 Ready to write response ...
	2024/01/08 22:55:59 Ready to marshal response ...
	2024/01/08 22:55:59 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	
	
	==> kernel <==
	 22:56:08 up 3 min,  0 users,  load average: 3.93, 3.12, 1.35
	Linux addons-910124 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] <==
	E0108 22:53:24.748644       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:53:24.749579       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:53:25.270828       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.102.9.45"}
	I0108 22:53:25.292317       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0108 22:53:25.642560       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.111.188.171"}
	W0108 22:53:26.474358       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0108 22:53:27.193013       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:53:28.640545       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.135.36"}
	I0108 22:53:28.745297       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:53:33.746348       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:53:56.943691       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 22:54:02.480610       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.24.58:443: connect: connection refused
	W0108 22:54:02.480818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:54:02.481691       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0108 22:54:02.481602       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.24.58:443: connect: connection refused
	I0108 22:54:02.483383       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 22:54:02.487140       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.24.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.24.58:443: connect: connection refused
	I0108 22:54:02.661810       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:54:56.949493       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:55:56.953633       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:56:05.355878       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.204.1"}
	
	
	==> kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] <==
	I0108 22:55:04.894222       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 22:55:05.000774       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 22:55:05.017391       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 22:55:05.029975       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 22:55:05.030092       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0108 22:55:05.197168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="8.321201ms"
	I0108 22:55:05.197359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="66.505µs"
	I0108 22:55:49.352539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="202.518µs"
	I0108 22:55:49.828371       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:55:49.828867       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:55:55.752162       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0108 22:55:55.962546       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:55:55.965293       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:55:58.444523       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:55:58.444638       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:56:00.107529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="19.392184ms"
	I0108 22:56:00.107732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="116.075µs"
	I0108 22:56:03.045741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-64c8c85f65" duration="14.828µs"
	I0108 22:56:05.397615       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-7ddfbb94ff to 1"
	I0108 22:56:05.479784       1 event.go:307] "Event occurred" object="headlamp/headlamp-7ddfbb94ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-7ddfbb94ff-sfj86"
	I0108 22:56:05.499464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="102.083028ms"
	I0108 22:56:05.562379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="52.492638ms"
	I0108 22:56:05.562497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="62.38µs"
	I0108 22:56:05.628628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="77.27µs"
	I0108 22:56:07.225092       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] <==
	I0108 22:53:34.021779       1 server_others.go:69] "Using iptables proxy"
	I0108 22:53:34.185089       1 node.go:141] Successfully retrieved node IP: 192.168.39.129
	I0108 22:53:34.978246       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:53:34.978300       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:53:35.075215       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:53:35.075326       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:53:35.075574       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:53:35.075624       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:53:35.138842       1 config.go:188] "Starting service config controller"
	I0108 22:53:35.149179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:53:35.149247       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:53:35.149267       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:53:35.178571       1 config.go:315] "Starting node config controller"
	I0108 22:53:35.178619       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:53:35.255993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:53:35.256128       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:53:35.281030       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] <==
	W0108 22:52:58.034169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:52:58.034306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:52:58.052296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:52:58.052394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:52:58.126022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:52:58.126086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:52:58.131868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:52:58.131990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:52:58.147529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:52:58.147634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:52:58.193292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:52:58.193417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:52:58.227354       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:52:58.227700       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:52:58.227606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:52:58.227829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:52:58.274706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:58.274771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:58.423303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:52:58.423397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:52:58.518200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:58.518306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:58.562190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:52:58.562242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 22:52:59.875800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:52:27 UTC, ends at Mon 2024-01-08 22:56:08 UTC. --
	Jan 08 22:56:03 addons-910124 kubelet[1247]: I0108 22:56:03.687304    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pdndc\" (UniqueName: \"kubernetes.io/projected/c105ca62-3293-4681-aa1a-1a25a0f68530-kube-api-access-pdndc\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:56:04 addons-910124 kubelet[1247]: I0108 22:56:04.495597    1247 scope.go:117] "RemoveContainer" containerID="b7353a53c09c02597753a18188ec49211866a0034dbb2be494a97413181b0341"
	Jan 08 22:56:04 addons-910124 kubelet[1247]: I0108 22:56:04.899787    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g4d4\" (UniqueName: \"kubernetes.io/projected/0fff942f-0af4-43aa-8277-870898d1bbe5-kube-api-access-7g4d4\") pod \"0fff942f-0af4-43aa-8277-870898d1bbe5\" (UID: \"0fff942f-0af4-43aa-8277-870898d1bbe5\") "
	Jan 08 22:56:04 addons-910124 kubelet[1247]: I0108 22:56:04.899862    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fff942f-0af4-43aa-8277-870898d1bbe5-gcp-creds\") pod \"0fff942f-0af4-43aa-8277-870898d1bbe5\" (UID: \"0fff942f-0af4-43aa-8277-870898d1bbe5\") "
	Jan 08 22:56:04 addons-910124 kubelet[1247]: I0108 22:56:04.900060    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff942f-0af4-43aa-8277-870898d1bbe5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0fff942f-0af4-43aa-8277-870898d1bbe5" (UID: "0fff942f-0af4-43aa-8277-870898d1bbe5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 08 22:56:04 addons-910124 kubelet[1247]: I0108 22:56:04.916867    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fff942f-0af4-43aa-8277-870898d1bbe5-kube-api-access-7g4d4" (OuterVolumeSpecName: "kube-api-access-7g4d4") pod "0fff942f-0af4-43aa-8277-870898d1bbe5" (UID: "0fff942f-0af4-43aa-8277-870898d1bbe5"). InnerVolumeSpecName "kube-api-access-7g4d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.001161    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7g4d4\" (UniqueName: \"kubernetes.io/projected/0fff942f-0af4-43aa-8277-870898d1bbe5-kube-api-access-7g4d4\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.001192    1247 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0fff942f-0af4-43aa-8277-870898d1bbe5-gcp-creds\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.058250    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0fff942f-0af4-43aa-8277-870898d1bbe5" path="/var/lib/kubelet/pods/0fff942f-0af4-43aa-8277-870898d1bbe5/volumes"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.058771    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c105ca62-3293-4681-aa1a-1a25a0f68530" path="/var/lib/kubelet/pods/c105ca62-3293-4681-aa1a-1a25a0f68530/volumes"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.509556    1247 scope.go:117] "RemoveContainer" containerID="9640e599e7dc05bf05bb5b74a73f9ef578c9834bbc019f7c34902c09bad052b8"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.533245    1247 topology_manager.go:215] "Topology Admit Handler" podUID="c595560c-8e3d-4723-8840-ad6fe139c985" podNamespace="headlamp" podName="headlamp-7ddfbb94ff-sfj86"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: E0108 22:56:05.533612    1247 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0fff942f-0af4-43aa-8277-870898d1bbe5" containerName="registry-test"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: E0108 22:56:05.533639    1247 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c105ca62-3293-4681-aa1a-1a25a0f68530" containerName="cloud-spanner-emulator"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.533678    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="c105ca62-3293-4681-aa1a-1a25a0f68530" containerName="cloud-spanner-emulator"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.533720    1247 memory_manager.go:346] "RemoveStaleState removing state" podUID="0fff942f-0af4-43aa-8277-870898d1bbe5" containerName="registry-test"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.606262    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c595560c-8e3d-4723-8840-ad6fe139c985-gcp-creds\") pod \"headlamp-7ddfbb94ff-sfj86\" (UID: \"c595560c-8e3d-4723-8840-ad6fe139c985\") " pod="headlamp/headlamp-7ddfbb94ff-sfj86"
	Jan 08 22:56:05 addons-910124 kubelet[1247]: I0108 22:56:05.606359    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkf6w\" (UniqueName: \"kubernetes.io/projected/c595560c-8e3d-4723-8840-ad6fe139c985-kube-api-access-wkf6w\") pod \"headlamp-7ddfbb94ff-sfj86\" (UID: \"c595560c-8e3d-4723-8840-ad6fe139c985\") " pod="headlamp/headlamp-7ddfbb94ff-sfj86"
	Jan 08 22:56:07 addons-910124 kubelet[1247]: I0108 22:56:07.050655    1247 scope.go:117] "RemoveContainer" containerID="d62b32b0225d8652708461090f3ae7af05e2f7805bd33166695e725e72159d1f"
	Jan 08 22:56:07 addons-910124 kubelet[1247]: E0108 22:56:07.055352    1247 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-dg6l5_gadget(ab57f458-54bf-4b04-abcd-172bd203b03e)\"" pod="gadget/gadget-dg6l5" podUID="ab57f458-54bf-4b04-abcd-172bd203b03e"
	Jan 08 22:56:08 addons-910124 kubelet[1247]: I0108 22:56:08.076352    1247 topology_manager.go:215] "Topology Admit Handler" podUID="cb954aae-f170-4c74-a1a6-5770cc9fe910" podNamespace="default" podName="task-pv-pod"
	Jan 08 22:56:08 addons-910124 kubelet[1247]: I0108 22:56:08.228483    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-838993e9-8f14-4313-9980-01e166fc3d0f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1b9f8dfd-ae79-11ee-95b2-be6e6a3522a8\") pod \"task-pv-pod\" (UID: \"cb954aae-f170-4c74-a1a6-5770cc9fe910\") " pod="default/task-pv-pod"
	Jan 08 22:56:08 addons-910124 kubelet[1247]: I0108 22:56:08.228591    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cb954aae-f170-4c74-a1a6-5770cc9fe910-gcp-creds\") pod \"task-pv-pod\" (UID: \"cb954aae-f170-4c74-a1a6-5770cc9fe910\") " pod="default/task-pv-pod"
	Jan 08 22:56:08 addons-910124 kubelet[1247]: I0108 22:56:08.228643    1247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tts2w\" (UniqueName: \"kubernetes.io/projected/cb954aae-f170-4c74-a1a6-5770cc9fe910-kube-api-access-tts2w\") pod \"task-pv-pod\" (UID: \"cb954aae-f170-4c74-a1a6-5770cc9fe910\") " pod="default/task-pv-pod"
	Jan 08 22:56:08 addons-910124 kubelet[1247]: I0108 22:56:08.348197    1247 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-838993e9-8f14-4313-9980-01e166fc3d0f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1b9f8dfd-ae79-11ee-95b2-be6e6a3522a8\") pod \"task-pv-pod\" (UID: \"cb954aae-f170-4c74-a1a6-5770cc9fe910\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/d3335d277a2109f79e892371bbe4280086851e51d4b3e75388340078549d9fad/globalmount\"" pod="default/task-pv-pod"
	
	
	==> storage-provisioner [f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465] <==
	I0108 22:53:39.547454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:53:39.666517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:53:39.666631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:53:39.679366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:53:39.679657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb!
	I0108 22:53:39.832324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8b7bb6d-9254-4075-ad69-e63df017e5f5", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb became leader
	I0108 22:53:40.083764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910124 -n addons-910124
helpers_test.go:261: (dbg) Run:  kubectl --context addons-910124 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: task-pv-pod test-local-path headlamp-7ddfbb94ff-sfj86 ingress-nginx-admission-create-4t8bl ingress-nginx-admission-patch-6xc4v
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-910124 describe pod task-pv-pod test-local-path headlamp-7ddfbb94ff-sfj86 ingress-nginx-admission-create-4t8bl ingress-nginx-admission-patch-6xc4v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-910124 describe pod task-pv-pod test-local-path headlamp-7ddfbb94ff-sfj86 ingress-nginx-admission-create-4t8bl ingress-nginx-admission-patch-6xc4v: exit status 1 (105.68802ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-910124/192.168.39.129
	Start Time:       Mon, 08 Jan 2024 22:56:08 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tts2w (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-tts2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/task-pv-pod to addons-910124
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-910124/192.168.39.129
	Start Time:       Mon, 08 Jan 2024 22:56:02 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  busybox:
	    Container ID:  cri-o://d602d7712ebcb27ee8ddd5be7665911b6ab92ac79e3544b91f0097a5c45c7d43
	    Image:         busybox:stable
	    Image ID:      docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Jan 2024 22:56:08 +0000
	      Finished:     Mon, 08 Jan 2024 22:56:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xcrsj (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xcrsj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/test-local-path to addons-910124
	  Normal  Pulling    5s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "busybox:stable" in 3.245s (3.246s including waiting)
	  Normal  Created    1s    kubelet            Created container busybox
	  Normal  Started    1s    kubelet            Started container busybox

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-7ddfbb94ff-sfj86" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-4t8bl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6xc4v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-910124 describe pod task-pv-pod test-local-path headlamp-7ddfbb94ff-sfj86 ingress-nginx-admission-create-4t8bl ingress-nginx-admission-patch-6xc4v: exit status 1
--- FAIL: TestAddons/parallel/Registry (19.77s)

x
+
TestAddons/parallel/Ingress (155.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-910124 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-910124 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-910124 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b36765f8-4cc3-464b-a1d8-aac9847d8391] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b36765f8-4cc3-464b-a1d8-aac9847d8391] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.00637049s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910124 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.137640825s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-910124 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.129
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable ingress-dns --alsologtostderr -v=1: (1.397883134s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable ingress --alsologtostderr -v=1: (8.126306594s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910124 -n addons-910124
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 logs -n 25: (1.614040497s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | -p download-only-138294                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| delete  | -p download-only-138294                                                                     | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| delete  | -p download-only-138294                                                                     | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-576323 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | binary-mirror-576323                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42563                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-576323                                                                     | binary-mirror-576323 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-910124                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | addons-910124                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-910124 --wait=true                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC | 08 Jan 24 22:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:55 UTC | 08 Jan 24 22:55 UTC |
	|         | -p addons-910124                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | addons-910124                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | -p addons-910124                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-910124 ip                                                                            | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	| addons  | addons-910124 addons disable                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC |                     |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-910124 ssh cat                                                                       | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | /opt/local-path-provisioner/pvc-5a47dfec-d168-4824-b7d6-ab2a0c18ba84_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-910124 addons disable                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | addons-910124                                                                               |                      |         |         |                     |                     |
	| addons  | addons-910124 addons                                                                        | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-910124 addons disable                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-910124 ssh curl -s                                                                   | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-910124 addons                                                                        | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-910124 addons                                                                        | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:57 UTC | 08 Jan 24 22:57 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-910124 ip                                                                            | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:58 UTC | 08 Jan 24 22:58 UTC |
	| addons  | addons-910124 addons disable                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:58 UTC | 08 Jan 24 22:58 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-910124 addons disable                                                                | addons-910124        | jenkins | v1.32.0 | 08 Jan 24 22:58 UTC | 08 Jan 24 22:59 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:52:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:52:13.382889  407512 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:52:13.383046  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:13.383052  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:52:13.383056  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:13.383252  407512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 22:52:13.383931  407512 out.go:303] Setting JSON to false
	I0108 22:52:13.384848  407512 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12859,"bootTime":1704741474,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:52:13.384973  407512 start.go:138] virtualization: kvm guest
	I0108 22:52:13.387433  407512 out.go:177] * [addons-910124] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:52:13.389066  407512 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 22:52:13.390347  407512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:52:13.389147  407512 notify.go:220] Checking for updates...
	I0108 22:52:13.391854  407512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:52:13.393313  407512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:13.394637  407512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:52:13.395949  407512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:52:13.397472  407512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:52:13.434904  407512 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:52:13.436211  407512 start.go:298] selected driver: kvm2
	I0108 22:52:13.436234  407512 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:52:13.436250  407512 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:52:13.437005  407512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:13.437103  407512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:52:13.454531  407512 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:52:13.454588  407512 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:52:13.454846  407512 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:52:13.454928  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:52:13.454942  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:52:13.454952  407512 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:52:13.454973  407512 start_flags.go:323] config:
	{Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:13.455474  407512 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:13.457737  407512 out.go:177] * Starting control plane node addons-910124 in cluster addons-910124
	I0108 22:52:13.459865  407512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:13.459927  407512 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:52:13.459943  407512 cache.go:56] Caching tarball of preloaded images
	I0108 22:52:13.460053  407512 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 22:52:13.460065  407512 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:52:13.460482  407512 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json ...
	I0108 22:52:13.460516  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json: {Name:mkb106a6a83962c00c178961d9c58cf64f36e4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:13.460693  407512 start.go:365] acquiring machines lock for addons-910124: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:52:13.460754  407512 start.go:369] acquired machines lock for "addons-910124" in 43.497µs
	I0108 22:52:13.460781  407512 start.go:93] Provisioning new machine with config: &{Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:52:13.460883  407512 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:52:13.463549  407512 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 22:52:13.463742  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:52:13.463778  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:52:13.479313  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0108 22:52:13.480214  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:52:13.481142  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:52:13.481168  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:52:13.481896  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:52:13.482128  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:13.482314  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:13.482491  407512 start.go:159] libmachine.API.Create for "addons-910124" (driver="kvm2")
	I0108 22:52:13.482534  407512 client.go:168] LocalClient.Create starting
	I0108 22:52:13.482586  407512 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0108 22:52:13.742583  407512 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0108 22:52:13.886416  407512 main.go:141] libmachine: Running pre-create checks...
	I0108 22:52:13.886452  407512 main.go:141] libmachine: (addons-910124) Calling .PreCreateCheck
	I0108 22:52:13.887083  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:13.887782  407512 main.go:141] libmachine: Creating machine...
	I0108 22:52:13.887806  407512 main.go:141] libmachine: (addons-910124) Calling .Create
	I0108 22:52:13.888019  407512 main.go:141] libmachine: (addons-910124) Creating KVM machine...
	I0108 22:52:13.889641  407512 main.go:141] libmachine: (addons-910124) DBG | found existing default KVM network
	I0108 22:52:13.890775  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:13.890469  407534 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a50}
	I0108 22:52:13.896745  407512 main.go:141] libmachine: (addons-910124) DBG | trying to create private KVM network mk-addons-910124 192.168.39.0/24...
	I0108 22:52:13.985496  407512 main.go:141] libmachine: (addons-910124) DBG | private KVM network mk-addons-910124 192.168.39.0/24 created
	I0108 22:52:13.985547  407512 main.go:141] libmachine: (addons-910124) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 ...
	I0108 22:52:13.985563  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:13.985450  407534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:13.985582  407512 main.go:141] libmachine: (addons-910124) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:52:13.985692  407512 main.go:141] libmachine: (addons-910124) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 22:52:14.239725  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.239579  407534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa...
	I0108 22:52:14.297213  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.297082  407534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/addons-910124.rawdisk...
	I0108 22:52:14.297262  407512 main.go:141] libmachine: (addons-910124) DBG | Writing magic tar header
	I0108 22:52:14.297279  407512 main.go:141] libmachine: (addons-910124) DBG | Writing SSH key tar header
	I0108 22:52:14.297340  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:14.297276  407534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 ...
	I0108 22:52:14.297378  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124
	I0108 22:52:14.297403  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0108 22:52:14.297435  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124 (perms=drwx------)
	I0108 22:52:14.297457  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:14.297472  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0108 22:52:14.297484  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:52:14.297495  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:52:14.297506  407512 main.go:141] libmachine: (addons-910124) DBG | Checking permissions on dir: /home
	I0108 22:52:14.297580  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:52:14.297616  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0108 22:52:14.297631  407512 main.go:141] libmachine: (addons-910124) DBG | Skipping /home - not owner
	I0108 22:52:14.297647  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0108 22:52:14.297659  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:52:14.297672  407512 main.go:141] libmachine: (addons-910124) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:52:14.297684  407512 main.go:141] libmachine: (addons-910124) Creating domain...
	I0108 22:52:14.298759  407512 main.go:141] libmachine: (addons-910124) define libvirt domain using xml: 
	I0108 22:52:14.298793  407512 main.go:141] libmachine: (addons-910124) <domain type='kvm'>
	I0108 22:52:14.298802  407512 main.go:141] libmachine: (addons-910124)   <name>addons-910124</name>
	I0108 22:52:14.298808  407512 main.go:141] libmachine: (addons-910124)   <memory unit='MiB'>4000</memory>
	I0108 22:52:14.298814  407512 main.go:141] libmachine: (addons-910124)   <vcpu>2</vcpu>
	I0108 22:52:14.298820  407512 main.go:141] libmachine: (addons-910124)   <features>
	I0108 22:52:14.298825  407512 main.go:141] libmachine: (addons-910124)     <acpi/>
	I0108 22:52:14.298831  407512 main.go:141] libmachine: (addons-910124)     <apic/>
	I0108 22:52:14.298841  407512 main.go:141] libmachine: (addons-910124)     <pae/>
	I0108 22:52:14.298849  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.298884  407512 main.go:141] libmachine: (addons-910124)   </features>
	I0108 22:52:14.298904  407512 main.go:141] libmachine: (addons-910124)   <cpu mode='host-passthrough'>
	I0108 22:52:14.298910  407512 main.go:141] libmachine: (addons-910124)   
	I0108 22:52:14.298915  407512 main.go:141] libmachine: (addons-910124)   </cpu>
	I0108 22:52:14.298925  407512 main.go:141] libmachine: (addons-910124)   <os>
	I0108 22:52:14.298952  407512 main.go:141] libmachine: (addons-910124)     <type>hvm</type>
	I0108 22:52:14.298965  407512 main.go:141] libmachine: (addons-910124)     <boot dev='cdrom'/>
	I0108 22:52:14.298972  407512 main.go:141] libmachine: (addons-910124)     <boot dev='hd'/>
	I0108 22:52:14.298979  407512 main.go:141] libmachine: (addons-910124)     <bootmenu enable='no'/>
	I0108 22:52:14.298986  407512 main.go:141] libmachine: (addons-910124)   </os>
	I0108 22:52:14.298992  407512 main.go:141] libmachine: (addons-910124)   <devices>
	I0108 22:52:14.298999  407512 main.go:141] libmachine: (addons-910124)     <disk type='file' device='cdrom'>
	I0108 22:52:14.299043  407512 main.go:141] libmachine: (addons-910124)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/boot2docker.iso'/>
	I0108 22:52:14.299072  407512 main.go:141] libmachine: (addons-910124)       <target dev='hdc' bus='scsi'/>
	I0108 22:52:14.299085  407512 main.go:141] libmachine: (addons-910124)       <readonly/>
	I0108 22:52:14.299093  407512 main.go:141] libmachine: (addons-910124)     </disk>
	I0108 22:52:14.299105  407512 main.go:141] libmachine: (addons-910124)     <disk type='file' device='disk'>
	I0108 22:52:14.299111  407512 main.go:141] libmachine: (addons-910124)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:52:14.299123  407512 main.go:141] libmachine: (addons-910124)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/addons-910124.rawdisk'/>
	I0108 22:52:14.299131  407512 main.go:141] libmachine: (addons-910124)       <target dev='hda' bus='virtio'/>
	I0108 22:52:14.299141  407512 main.go:141] libmachine: (addons-910124)     </disk>
	I0108 22:52:14.299153  407512 main.go:141] libmachine: (addons-910124)     <interface type='network'>
	I0108 22:52:14.299166  407512 main.go:141] libmachine: (addons-910124)       <source network='mk-addons-910124'/>
	I0108 22:52:14.299180  407512 main.go:141] libmachine: (addons-910124)       <model type='virtio'/>
	I0108 22:52:14.299194  407512 main.go:141] libmachine: (addons-910124)     </interface>
	I0108 22:52:14.299212  407512 main.go:141] libmachine: (addons-910124)     <interface type='network'>
	I0108 22:52:14.299222  407512 main.go:141] libmachine: (addons-910124)       <source network='default'/>
	I0108 22:52:14.299231  407512 main.go:141] libmachine: (addons-910124)       <model type='virtio'/>
	I0108 22:52:14.299239  407512 main.go:141] libmachine: (addons-910124)     </interface>
	I0108 22:52:14.299245  407512 main.go:141] libmachine: (addons-910124)     <serial type='pty'>
	I0108 22:52:14.299253  407512 main.go:141] libmachine: (addons-910124)       <target port='0'/>
	I0108 22:52:14.299259  407512 main.go:141] libmachine: (addons-910124)     </serial>
	I0108 22:52:14.299265  407512 main.go:141] libmachine: (addons-910124)     <console type='pty'>
	I0108 22:52:14.299271  407512 main.go:141] libmachine: (addons-910124)       <target type='serial' port='0'/>
	I0108 22:52:14.299280  407512 main.go:141] libmachine: (addons-910124)     </console>
	I0108 22:52:14.299286  407512 main.go:141] libmachine: (addons-910124)     <rng model='virtio'>
	I0108 22:52:14.299297  407512 main.go:141] libmachine: (addons-910124)       <backend model='random'>/dev/random</backend>
	I0108 22:52:14.299304  407512 main.go:141] libmachine: (addons-910124)     </rng>
	I0108 22:52:14.299310  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.299318  407512 main.go:141] libmachine: (addons-910124)     
	I0108 22:52:14.299326  407512 main.go:141] libmachine: (addons-910124)   </devices>
	I0108 22:52:14.299334  407512 main.go:141] libmachine: (addons-910124) </domain>
	I0108 22:52:14.299340  407512 main.go:141] libmachine: (addons-910124) 
	I0108 22:52:14.304243  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:7a:22:34 in network default
	I0108 22:52:14.304927  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:14.304956  407512 main.go:141] libmachine: (addons-910124) Ensuring networks are active...
	I0108 22:52:14.305798  407512 main.go:141] libmachine: (addons-910124) Ensuring network default is active
	I0108 22:52:14.306110  407512 main.go:141] libmachine: (addons-910124) Ensuring network mk-addons-910124 is active
	I0108 22:52:14.306740  407512 main.go:141] libmachine: (addons-910124) Getting domain xml...
	I0108 22:52:14.307605  407512 main.go:141] libmachine: (addons-910124) Creating domain...
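	[editor's note] The define-and-start sequence logged above (the <domain> XML followed by "Creating domain...") can be reproduced with the libvirt Go bindings. This is a minimal sketch only: the import path libvirt.org/go/libvirt and the XML file name are assumptions, and minikube's kvm2 driver uses its own wrapper rather than this exact code.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read the <domain> XML shown above (file name is a placeholder).
	xmlCfg, err := os.ReadFile("addons-910124.xml")
	if err != nil {
		panic(err)
	}

	// Same URI as KVMQemuURI in the cluster config dump.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// "define libvirt domain using xml" ...
	dom, err := conn.DomainDefineXML(string(xmlCfg))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// ... then "Creating domain...", i.e. starting the defined domain.
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
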
	I0108 22:52:15.621820  407512 main.go:141] libmachine: (addons-910124) Waiting to get IP...
	I0108 22:52:15.622779  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:15.623348  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:15.623515  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:15.623409  407534 retry.go:31] will retry after 256.402572ms: waiting for machine to come up
	I0108 22:52:15.882358  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:15.882860  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:15.882921  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:15.882778  407534 retry.go:31] will retry after 252.502976ms: waiting for machine to come up
	I0108 22:52:16.137292  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:16.137802  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:16.137839  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:16.137754  407534 retry.go:31] will retry after 420.002938ms: waiting for machine to come up
	I0108 22:52:16.559696  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:16.560282  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:16.560306  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:16.560235  407534 retry.go:31] will retry after 519.129626ms: waiting for machine to come up
	I0108 22:52:17.081041  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:17.081498  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:17.081544  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:17.081438  407534 retry.go:31] will retry after 549.375377ms: waiting for machine to come up
	I0108 22:52:17.632182  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:17.632635  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:17.632669  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:17.632581  407534 retry.go:31] will retry after 879.065742ms: waiting for machine to come up
	I0108 22:52:18.513659  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:18.514091  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:18.514124  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:18.514029  407534 retry.go:31] will retry after 1.024749708s: waiting for machine to come up
	I0108 22:52:19.540306  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:19.540799  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:19.540827  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:19.540726  407534 retry.go:31] will retry after 1.043170144s: waiting for machine to come up
	I0108 22:52:20.586073  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:20.586468  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:20.586501  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:20.586424  407534 retry.go:31] will retry after 1.66659817s: waiting for machine to come up
	I0108 22:52:22.255467  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:22.255943  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:22.255975  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:22.255894  407534 retry.go:31] will retry after 2.251236752s: waiting for machine to come up
	I0108 22:52:24.508972  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:24.509574  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:24.509674  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:24.509550  407534 retry.go:31] will retry after 2.167195426s: waiting for machine to come up
	I0108 22:52:26.680245  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:26.680801  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:26.680826  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:26.680769  407534 retry.go:31] will retry after 2.992105106s: waiting for machine to come up
	I0108 22:52:29.674597  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:29.675001  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:29.675033  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:29.674943  407534 retry.go:31] will retry after 2.737710522s: waiting for machine to come up
	I0108 22:52:32.416139  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:32.416574  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find current IP address of domain addons-910124 in network mk-addons-910124
	I0108 22:52:32.416602  407512 main.go:141] libmachine: (addons-910124) DBG | I0108 22:52:32.416526  407534 retry.go:31] will retry after 3.984236982s: waiting for machine to come up
	I0108 22:52:36.405098  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.405690  407512 main.go:141] libmachine: (addons-910124) Found IP for machine: 192.168.39.129
	I0108 22:52:36.405722  407512 main.go:141] libmachine: (addons-910124) Reserving static IP address...
	I0108 22:52:36.405742  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has current primary IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.406123  407512 main.go:141] libmachine: (addons-910124) DBG | unable to find host DHCP lease matching {name: "addons-910124", mac: "52:54:00:c1:ef:95", ip: "192.168.39.129"} in network mk-addons-910124
	I0108 22:52:36.497600  407512 main.go:141] libmachine: (addons-910124) DBG | Getting to WaitForSSH function...
	I0108 22:52:36.497634  407512 main.go:141] libmachine: (addons-910124) Reserved static IP address: 192.168.39.129
	I0108 22:52:36.497646  407512 main.go:141] libmachine: (addons-910124) Waiting for SSH to be available...
	I0108 22:52:36.500389  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.500794  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.500823  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.501020  407512 main.go:141] libmachine: (addons-910124) DBG | Using SSH client type: external
	I0108 22:52:36.501045  407512 main.go:141] libmachine: (addons-910124) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa (-rw-------)
	I0108 22:52:36.501138  407512 main.go:141] libmachine: (addons-910124) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:52:36.501168  407512 main.go:141] libmachine: (addons-910124) DBG | About to run SSH command:
	I0108 22:52:36.501207  407512 main.go:141] libmachine: (addons-910124) DBG | exit 0
	I0108 22:52:36.595647  407512 main.go:141] libmachine: (addons-910124) DBG | SSH cmd err, output: <nil>: 
	I0108 22:52:36.595896  407512 main.go:141] libmachine: (addons-910124) KVM machine creation complete!
	I0108 22:52:36.596279  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:36.596868  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:36.597059  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:36.597254  407512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:52:36.597270  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:52:36.598513  407512 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:52:36.598529  407512 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:52:36.598535  407512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:52:36.598542  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.600742  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.601039  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.601078  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.601202  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.601406  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.601556  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.601735  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.601900  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.602326  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.602345  407512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:52:36.723588  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
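	[editor's note] The `exit 0` run above is libmachine's probe that SSH inside the guest is reachable before provisioning starts. A minimal sketch of an equivalent probe with golang.org/x/crypto/ssh follows; the package choice is an assumption (libmachine ships its own SSH client), while the address, user, and key path are taken from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key generated during machine creation (path from the log above).
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
	}
	// Address taken from the DHCP lease in the log.
	client, err := ssh.Dial("tcp", "192.168.39.129:22", cfg)
	if err != nil {
		panic(err) // the real flow retries this until the guest's sshd is up
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The probe itself: run a no-op command and check it exits cleanly.
	if err := sess.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH available")
}
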
	I0108 22:52:36.723625  407512 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:52:36.723642  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.727345  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.727786  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.727829  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.728015  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.728313  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.728511  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.728690  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.728881  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.729212  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.729237  407512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:52:36.852656  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:52:36.852847  407512 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:52:36.852866  407512 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:52:36.852881  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:36.853216  407512 buildroot.go:166] provisioning hostname "addons-910124"
	I0108 22:52:36.853252  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:36.853512  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.856350  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.856840  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.856871  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.857092  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.857307  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.857486  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.857644  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.857885  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.858283  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.858303  407512 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-910124 && echo "addons-910124" | sudo tee /etc/hostname
	I0108 22:52:36.993416  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-910124
	
	I0108 22:52:36.993447  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:36.996414  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.996799  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:36.996828  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:36.996997  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:36.997211  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.997401  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:36.997557  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:36.997729  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:36.998054  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:36.998071  407512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910124/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:52:37.129274  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:52:37.129312  407512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 22:52:37.129372  407512 buildroot.go:174] setting up certificates
	I0108 22:52:37.129426  407512 provision.go:83] configureAuth start
	I0108 22:52:37.129445  407512 main.go:141] libmachine: (addons-910124) Calling .GetMachineName
	I0108 22:52:37.129770  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:37.132685  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.133007  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.133051  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.133253  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.135245  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.135553  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.135600  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.135670  407512 provision.go:138] copyHostCerts
	I0108 22:52:37.135752  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 22:52:37.135896  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 22:52:37.135973  407512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 22:52:37.136032  407512 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.addons-910124 san=[192.168.39.129 192.168.39.129 localhost 127.0.0.1 minikube addons-910124]
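	[editor's note] The server certificate generated above is signed by the minikube CA and carries the SAN list printed in that log line. A rough sketch of producing a comparable certificate with Go's crypto/x509 follows; it creates a throwaway CA instead of loading ca.pem/ca-key.pem, so it illustrates the SAN handling rather than minikube's actual provisioning helper. Error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate with the SANs listed in the provision log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-910124"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.129"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-910124"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate in PEM form (the server.pem equivalent).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
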
	I0108 22:52:37.250234  407512 provision.go:172] copyRemoteCerts
	I0108 22:52:37.250309  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:52:37.250364  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.253506  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.253921  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.253954  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.254129  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.254335  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.254483  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.254642  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:37.346000  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 22:52:37.371069  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:52:37.398153  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:52:37.423117  407512 provision.go:86] duration metric: configureAuth took 293.672688ms
	I0108 22:52:37.423160  407512 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:52:37.423426  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:52:37.423543  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.426787  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.427150  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.427207  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.427386  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.427660  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.427872  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.428023  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.428230  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:37.428609  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:37.428625  407512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:52:37.783403  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:52:37.783443  407512 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:52:37.783479  407512 main.go:141] libmachine: (addons-910124) Calling .GetURL
	I0108 22:52:37.784951  407512 main.go:141] libmachine: (addons-910124) DBG | Using libvirt version 6000000
	I0108 22:52:37.787481  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.787789  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.787825  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.788004  407512 main.go:141] libmachine: Docker is up and running!
	I0108 22:52:37.788024  407512 main.go:141] libmachine: Reticulating splines...
	I0108 22:52:37.788033  407512 client.go:171] LocalClient.Create took 24.305487314s
	I0108 22:52:37.788078  407512 start.go:167] duration metric: libmachine.API.Create for "addons-910124" took 24.30557735s
	I0108 22:52:37.788139  407512 start.go:300] post-start starting for "addons-910124" (driver="kvm2")
	I0108 22:52:37.788157  407512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:52:37.788182  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:37.788459  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:52:37.788486  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.790948  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.791563  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.791599  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.791849  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.792137  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.792330  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.792517  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:37.883513  407512 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:52:37.888256  407512 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:52:37.888284  407512 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 22:52:37.888352  407512 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 22:52:37.888373  407512 start.go:303] post-start completed in 100.224684ms
	I0108 22:52:37.888410  407512 main.go:141] libmachine: (addons-910124) Calling .GetConfigRaw
	I0108 22:52:37.889073  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:37.893517  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.894060  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.894096  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.894391  407512 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/config.json ...
	I0108 22:52:37.894644  407512 start.go:128] duration metric: createHost completed in 24.433746515s
	I0108 22:52:37.894710  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:37.897454  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.897831  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:37.897894  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:37.898034  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:37.898285  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.898487  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:37.898634  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:37.898812  407512 main.go:141] libmachine: Using SSH client type: native
	I0108 22:52:37.899137  407512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0108 22:52:37.899149  407512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 22:52:38.025173  407512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704754358.001128464
	
	I0108 22:52:38.025221  407512 fix.go:206] guest clock: 1704754358.001128464
	I0108 22:52:38.025230  407512 fix.go:219] Guest: 2024-01-08 22:52:38.001128464 +0000 UTC Remote: 2024-01-08 22:52:37.894686839 +0000 UTC m=+24.567921542 (delta=106.441625ms)
	I0108 22:52:38.025254  407512 fix.go:190] guest clock delta is within tolerance: 106.441625ms
	I0108 22:52:38.025259  407512 start.go:83] releasing machines lock for "addons-910124", held for 24.564492803s
	I0108 22:52:38.025282  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.025599  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:38.028385  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.028767  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.028790  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.029018  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029649  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029829  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:52:38.029931  407512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:52:38.029988  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:38.030120  407512 ssh_runner.go:195] Run: cat /version.json
	I0108 22:52:38.030153  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:52:38.032932  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033177  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033245  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.033294  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033433  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:38.033637  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:38.033645  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:38.033667  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:38.033853  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:38.033873  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:52:38.034017  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:52:38.034111  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:38.034199  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:52:38.034333  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:52:38.146774  407512 ssh_runner.go:195] Run: systemctl --version
	I0108 22:52:38.152950  407512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:52:38.319094  407512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:52:38.326823  407512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:52:38.326942  407512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:52:38.344744  407512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:52:38.344780  407512 start.go:475] detecting cgroup driver to use...
	I0108 22:52:38.344944  407512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:52:38.361973  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:52:38.375713  407512 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:52:38.375796  407512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:52:38.389464  407512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:52:38.404281  407512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:52:38.518754  407512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:52:38.651253  407512 docker.go:219] disabling docker service ...
	I0108 22:52:38.651349  407512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:52:38.668109  407512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:52:38.682602  407512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:52:38.802440  407512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:52:38.921494  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:52:38.936934  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:52:38.957465  407512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:52:38.957536  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.969247  407512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:52:38.969324  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.981002  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:52:38.994186  407512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
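A minimal sketch of what the pause_image, cgroup_manager, and conmon_cgroup edits above leave in the CRI-O drop-in, assuming the stock 02-crio.conf section layout; only the three values themselves come from the commands in this log:

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"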
	I0108 22:52:39.005490  407512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:52:39.019297  407512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:52:39.030284  407512 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:52:39.030361  407512 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:52:39.045004  407512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:52:39.057207  407512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:52:39.172047  407512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:52:39.372313  407512 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:52:39.372432  407512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:52:39.378246  407512 start.go:543] Will wait 60s for crictl version
	I0108 22:52:39.378392  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:52:39.383199  407512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:52:39.433246  407512 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 22:52:39.433363  407512 ssh_runner.go:195] Run: crio --version
	I0108 22:52:39.487014  407512 ssh_runner.go:195] Run: crio --version
	I0108 22:52:39.536292  407512 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 22:52:39.538122  407512 main.go:141] libmachine: (addons-910124) Calling .GetIP
	I0108 22:52:39.541261  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:39.541786  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:52:39.541816  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:52:39.542236  407512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:52:39.547286  407512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:52:39.562583  407512 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:52:39.562681  407512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:39.602440  407512 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:52:39.602529  407512 ssh_runner.go:195] Run: which lz4
	I0108 22:52:39.606570  407512 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 22:52:39.611132  407512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:52:39.611181  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 22:52:41.678449  407512 crio.go:444] Took 2.071916 seconds to copy over tarball
	I0108 22:52:41.678588  407512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:52:45.073822  407512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.395188593s)
	I0108 22:52:45.073871  407512 crio.go:451] Took 3.395374 seconds to extract the tarball
	I0108 22:52:45.073885  407512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:52:45.117171  407512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:52:45.192337  407512 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:52:45.192376  407512 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:52:45.192513  407512 ssh_runner.go:195] Run: crio config
	I0108 22:52:45.264304  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:52:45.264334  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:52:45.264368  407512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:52:45.264394  407512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910124 NodeName:addons-910124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:52:45.264564  407512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910124"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
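	
	This rendered config is copied to /var/tmp/minikube/kubeadm.yaml on the guest a little further down, and is the file kubeadm init is pointed at. If one wanted to sanity-check such a file by hand, recent kubeadm releases include a validate subcommand (assumed available in the v1.28.4 binaries staged under /var/lib/minikube/binaries), for example:
	
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml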
	
	I0108 22:52:45.264666  407512 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-910124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:52:45.264724  407512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:52:45.274425  407512 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:52:45.274521  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:52:45.284725  407512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0108 22:52:45.304137  407512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:52:45.323583  407512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 22:52:45.342901  407512 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0108 22:52:45.348297  407512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:52:45.362728  407512 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124 for IP: 192.168.39.129
	I0108 22:52:45.362802  407512 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.362957  407512 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 22:52:45.640384  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt ...
	I0108 22:52:45.640423  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt: {Name:mkc36a81852ddb14e4b61d277406a892b4ecb346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.640584  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key ...
	I0108 22:52:45.640595  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key: {Name:mk8a7ba93c9846e8f1712fa86d3e3c675b202eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:45.640666  407512 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 22:52:46.043234  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt ...
	I0108 22:52:46.043287  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt: {Name:mk54453d77771f2d907d21fe67e8d2434a1dc168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.043571  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key ...
	I0108 22:52:46.043592  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key: {Name:mk793ca51d4d203d77a080934d71e5dbc35c2281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.043772  407512 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key
	I0108 22:52:46.043791  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt with IP's: []
	I0108 22:52:46.303891  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt ...
	I0108 22:52:46.303927  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: {Name:mkcbcbec60054187cbf205990db887d434f8990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.304156  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key ...
	I0108 22:52:46.304174  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.key: {Name:mkebc8b4d625ab039fd81d53e4de79d49a3c4cd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.304269  407512 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0
	I0108 22:52:46.304294  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 with IP's: [192.168.39.129 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:52:46.540144  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 ...
	I0108 22:52:46.540192  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0: {Name:mk9ef56aba2c2d91ae74376a4f92b9791a8c93c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.540444  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0 ...
	I0108 22:52:46.540474  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0: {Name:mkdb35f8ce92d5cc71fea4e0f9d8c11ad40e3417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.540599  407512 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt.9233f9e0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt
	I0108 22:52:46.540736  407512 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key.9233f9e0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key
	I0108 22:52:46.540802  407512 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key
	I0108 22:52:46.540824  407512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt with IP's: []
	I0108 22:52:46.628398  407512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt ...
	I0108 22:52:46.628450  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt: {Name:mk73cdae1887d583f3ce444f0567f366b63ce828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.628744  407512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key ...
	I0108 22:52:46.628769  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key: {Name:mk4f6324c9a02eaa2b0d03c93035abfa9c6f9107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:46.629136  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:52:46.629191  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 22:52:46.629218  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:52:46.629251  407512 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 22:52:46.630150  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:52:46.659248  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:52:46.688201  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:52:46.714910  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:52:46.742329  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:52:46.769959  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:52:46.798677  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:52:46.826210  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 22:52:46.851771  407512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:52:46.879161  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:52:46.899307  407512 ssh_runner.go:195] Run: openssl version
	I0108 22:52:46.905485  407512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:52:46.918066  407512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.924414  407512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.924492  407512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:52:46.931262  407512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:52:46.943314  407512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:52:46.949343  407512 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:52:46.949463  407512 kubeadm.go:404] StartCluster: {Name:addons-910124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-910124 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:46.949563  407512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:52:46.949629  407512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:52:46.993524  407512 cri.go:89] found id: ""
	I0108 22:52:46.993625  407512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:52:47.004364  407512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:52:47.015390  407512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:52:47.026894  407512 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:52:47.026983  407512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:52:47.254889  407512 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:53:00.929839  407512 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:53:00.929927  407512 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:53:00.930044  407512 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:53:00.930178  407512 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:53:00.930314  407512 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:53:00.930407  407512 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:53:00.932303  407512 out.go:204]   - Generating certificates and keys ...
	I0108 22:53:00.932407  407512 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:53:00.932510  407512 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:53:00.932608  407512 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:53:00.932685  407512 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:53:00.932776  407512 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:53:00.932857  407512 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:53:00.932943  407512 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:53:00.933073  407512 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-910124 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0108 22:53:00.933152  407512 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:53:00.933303  407512 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-910124 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0108 22:53:00.933390  407512 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:53:00.933475  407512 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:53:00.933538  407512 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:53:00.933613  407512 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:53:00.933692  407512 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:53:00.933768  407512 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:53:00.933858  407512 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:53:00.933930  407512 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:53:00.934042  407512 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:53:00.934136  407512 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:53:00.936109  407512 out.go:204]   - Booting up control plane ...
	I0108 22:53:00.936233  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:53:00.936354  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:53:00.936479  407512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:53:00.936615  407512 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:53:00.936722  407512 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:53:00.936777  407512 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:53:00.936976  407512 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:53:00.937107  407512 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002317 seconds
	I0108 22:53:00.937256  407512 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:53:00.937428  407512 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:53:00.937513  407512 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:53:00.937685  407512 kubeadm.go:322] [mark-control-plane] Marking the node addons-910124 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:53:00.937756  407512 kubeadm.go:322] [bootstrap-token] Using token: ldtf5y.qptqwomvby4plhf0
	I0108 22:53:00.939424  407512 out.go:204]   - Configuring RBAC rules ...
	I0108 22:53:00.939601  407512 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:53:00.939716  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:53:00.939907  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:53:00.940079  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:53:00.940231  407512 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:53:00.940354  407512 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:53:00.940510  407512 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:53:00.940576  407512 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:53:00.940643  407512 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:53:00.940652  407512 kubeadm.go:322] 
	I0108 22:53:00.940733  407512 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:53:00.940749  407512 kubeadm.go:322] 
	I0108 22:53:00.940833  407512 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:53:00.940849  407512 kubeadm.go:322] 
	I0108 22:53:00.940900  407512 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:53:00.940966  407512 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:53:00.941034  407512 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:53:00.941049  407512 kubeadm.go:322] 
	I0108 22:53:00.941135  407512 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:53:00.941143  407512 kubeadm.go:322] 
	I0108 22:53:00.941217  407512 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:53:00.941228  407512 kubeadm.go:322] 
	I0108 22:53:00.941307  407512 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:53:00.941423  407512 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:53:00.941523  407512 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:53:00.941534  407512 kubeadm.go:322] 
	I0108 22:53:00.941640  407512 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:53:00.941762  407512 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:53:00.941782  407512 kubeadm.go:322] 
	I0108 22:53:00.941910  407512 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ldtf5y.qptqwomvby4plhf0 \
	I0108 22:53:00.942038  407512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0108 22:53:00.942076  407512 kubeadm.go:322] 	--control-plane 
	I0108 22:53:00.942084  407512 kubeadm.go:322] 
	I0108 22:53:00.942191  407512 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:53:00.942201  407512 kubeadm.go:322] 
	I0108 22:53:00.942311  407512 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ldtf5y.qptqwomvby4plhf0 \
	I0108 22:53:00.942468  407512 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0108 22:53:00.942497  407512 cni.go:84] Creating CNI manager for ""
	I0108 22:53:00.942512  407512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:53:00.945585  407512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:53:00.947132  407512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:53:00.979204  407512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:53:01.063579  407512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:53:01.063682  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.063740  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-910124 minikube.k8s.io/updated_at=2024_01_08T22_53_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.178717  407512 ops.go:34] apiserver oom_adj: -16
	I0108 22:53:01.381342  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:01.881500  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:02.381990  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:02.881422  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:03.381796  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:03.881664  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:04.381429  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:04.881427  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:05.382340  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:05.881912  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:06.381552  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:06.881708  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:07.381568  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:07.881582  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:08.382296  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:08.881465  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:09.381460  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:09.881673  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:10.381804  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:10.882266  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:11.382349  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:11.881423  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:12.381597  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:12.881364  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:13.382270  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:13.881502  407512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:53:14.046599  407512 kubeadm.go:1088] duration metric: took 12.982995697s to wait for elevateKubeSystemPrivileges.
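The burst of identical "kubectl get sa default" calls above is minikube polling until the default ServiceAccount appears in the new cluster, which is what the elevateKubeSystemPrivileges wait measures; the equivalent shell pattern, using the same paths as in the log, is roughly:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the timestamps above show retries at ~500ms intervals
	done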
	I0108 22:53:14.046652  407512 kubeadm.go:406] StartCluster complete in 27.097199836s
	I0108 22:53:14.046680  407512 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:53:14.046835  407512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:53:14.047467  407512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:53:14.047768  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:53:14.047875  407512 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 22:53:14.048007  407512 addons.go:69] Setting yakd=true in profile "addons-910124"
	I0108 22:53:14.048049  407512 addons.go:69] Setting metrics-server=true in profile "addons-910124"
	I0108 22:53:14.048071  407512 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-910124"
	I0108 22:53:14.048081  407512 addons.go:69] Setting registry=true in profile "addons-910124"
	I0108 22:53:14.048081  407512 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-910124"
	I0108 22:53:14.048097  407512 addons.go:69] Setting storage-provisioner=true in profile "addons-910124"
	I0108 22:53:14.048101  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:53:14.048115  407512 addons.go:69] Setting volumesnapshots=true in profile "addons-910124"
	I0108 22:53:14.048117  407512 addons.go:237] Setting addon storage-provisioner=true in "addons-910124"
	I0108 22:53:14.048126  407512 addons.go:237] Setting addon volumesnapshots=true in "addons-910124"
	I0108 22:53:14.048146  407512 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-910124"
	I0108 22:53:14.048104  407512 addons.go:237] Setting addon registry=true in "addons-910124"
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048071  407512 addons.go:237] Setting addon metrics-server=true in "addons-910124"
	I0108 22:53:14.048301  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048145  407512 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910124"
	I0108 22:53:14.048052  407512 addons.go:69] Setting cloud-spanner=true in profile "addons-910124"
	I0108 22:53:14.048465  407512 addons.go:237] Setting addon cloud-spanner=true in "addons-910124"
	I0108 22:53:14.048510  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048741  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048741  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048773  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048777  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048194  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048800  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048801  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048783  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048028  407512 addons.go:69] Setting gcp-auth=true in profile "addons-910124"
	I0108 22:53:14.048860  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.048017  407512 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-910124"
	I0108 22:53:14.048879  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048890  407512 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-910124"
	I0108 22:53:14.048012  407512 addons.go:69] Setting ingress=true in profile "addons-910124"
	I0108 22:53:14.048903  407512 mustload.go:65] Loading cluster: addons-910124
	I0108 22:53:14.048905  407512 addons.go:237] Setting addon ingress=true in "addons-910124"
	I0108 22:53:14.048193  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.048072  407512 addons.go:237] Setting addon yakd=true in "addons-910124"
	I0108 22:53:14.048912  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.048034  407512 addons.go:69] Setting default-storageclass=true in profile "addons-910124"
	I0108 22:53:14.048974  407512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-910124"
	I0108 22:53:14.048036  407512 addons.go:69] Setting helm-tiller=true in profile "addons-910124"
	I0108 22:53:14.048986  407512 addons.go:237] Setting addon helm-tiller=true in "addons-910124"
	I0108 22:53:14.048034  407512 addons.go:69] Setting ingress-dns=true in profile "addons-910124"
	I0108 22:53:14.049013  407512 addons.go:237] Setting addon ingress-dns=true in "addons-910124"
	I0108 22:53:14.048074  407512 addons.go:69] Setting inspektor-gadget=true in profile "addons-910124"
	I0108 22:53:14.049085  407512 addons.go:237] Setting addon inspektor-gadget=true in "addons-910124"
	I0108 22:53:14.049095  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049116  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049139  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049243  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049269  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049432  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049518  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049557  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049667  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049696  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.049703  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049778  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.049779  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.049812  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050125  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.050200  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.050239  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050476  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.050496  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.050497  407512 config.go:182] Loaded profile config "addons-910124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:53:14.050887  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.072180  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0108 22:53:14.072408  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0108 22:53:14.072878  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.073424  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.073446  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.073824  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.074485  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.074534  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.074844  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41345
	I0108 22:53:14.075129  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.075414  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.075986  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.076018  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.076246  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.076265  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.076407  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.076657  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.077209  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.077255  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.078365  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0108 22:53:14.079027  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.079065  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.079304  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.080005  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.080073  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.080631  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.081348  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.081374  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083616  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.083657  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083805  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.083848  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.083997  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.084036  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.092804  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0108 22:53:14.093521  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.094253  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.094282  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.094362  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0108 22:53:14.095009  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.095790  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.095855  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.099848  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.100694  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.100726  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.101274  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.101967  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.102016  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.103127  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0108 22:53:14.103840  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.104405  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.104432  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.104826  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.105403  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.105446  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.108622  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0108 22:53:14.109166  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.109680  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.109701  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.110072  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.110276  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.112717  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.114755  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:53:14.114360  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45813
	I0108 22:53:14.114506  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0108 22:53:14.116040  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:53:14.116656  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.117800  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:53:14.119180  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:53:14.118513  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.118827  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.119018  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0108 22:53:14.120386  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:53:14.120471  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.120958  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.121855  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:53:14.123241  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:53:14.124485  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:53:14.125649  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:53:14.125668  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:53:14.122438  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.125692  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.125714  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.124587  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.125767  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.122857  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.123927  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0108 22:53:14.124725  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0108 22:53:14.126878  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.126946  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.127543  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.127929  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.128004  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.128227  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.128245  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.129171  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.129229  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.129541  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.129614  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.129775  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.129794  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.130221  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.130255  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.130478  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.130517  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.130760  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.130837  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.130859  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.131532  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.131756  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.131910  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.132063  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.133920  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0108 22:53:14.134951  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.137144  407512 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:53:14.138272  407512 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:53:14.138293  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:53:14.137925  407512 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-910124"
	I0108 22:53:14.138317  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.138356  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.136546  407512 addons.go:237] Setting addon default-storageclass=true in "addons-910124"
	I0108 22:53:14.138403  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.138772  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.138807  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.138841  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.138902  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.137972  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0108 22:53:14.135815  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.140301  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.140977  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.140997  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.141505  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.141737  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.143216  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.143239  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.143311  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.143332  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.143352  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.143399  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.143701  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.143765  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.143897  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.144014  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.144258  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.144301  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.146764  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.148997  407512 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:53:14.147454  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0108 22:53:14.147799  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0108 22:53:14.150614  407512 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:53:14.150638  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:53:14.150669  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.151247  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.151340  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.152046  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.152080  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.152272  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.152291  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.152728  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.152782  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.153138  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.153202  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.154356  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0108 22:53:14.154983  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0108 22:53:14.155614  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.156164  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.156468  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.156582  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0108 22:53:14.156755  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.156998  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.157015  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.157171  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0108 22:53:14.158938  407512 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:53:14.157388  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.157442  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.157694  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.157930  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.157975  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.158210  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.160481  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.160767  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.161533  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.161555  407512 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:53:14.162552  407512 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:53:14.163915  407512 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:53:14.163937  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:53:14.163962  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.165510  407512 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:53:14.165536  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:53:14.165563  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.162625  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.162056  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.165702  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.162143  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.165760  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.161806  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.166842  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.166957  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0108 22:53:14.167044  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.167105  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.167215  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.167810  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.169928  407512 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:53:14.168237  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.167821  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.169434  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.169598  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.169778  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46179
	I0108 22:53:14.169822  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.170077  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.170318  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.171035  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.171651  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.171753  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.171785  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.171913  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:53:14.171933  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:53:14.171950  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.171950  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.171995  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.172090  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.172149  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.172292  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.172370  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.172388  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.172581  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.172672  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.173160  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.173181  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.173686  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.173792  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:14.174063  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.174096  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.174211  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.174253  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.174384  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.174395  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.174581  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.174806  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.175058  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.177386  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.178083  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.178148  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.178169  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.178190  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.178199  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0108 22:53:14.180521  407512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:53:14.178721  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.178723  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.179002  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.182156  407512 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:53:14.182179  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:53:14.182208  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.182699  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.182725  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.184901  407512 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:53:14.182949  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.183196  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.186497  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:53:14.186510  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:53:14.186535  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.186944  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.187320  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.187545  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.187597  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.189030  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.189319  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0108 22:53:14.189561  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.189582  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.189763  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.189916  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.190021  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.190039  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.190213  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.190444  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.190469  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.190506  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.190519  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.190536  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.190703  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.190849  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.190966  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.191101  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.191297  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.193233  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.195191  407512 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 22:53:14.196668  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 22:53:14.196698  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 22:53:14.196729  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.199856  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I0108 22:53:14.200263  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.200624  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.200868  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.200891  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.201445  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.201508  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.201525  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.201662  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.201731  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.201868  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.201921  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.202030  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.207780  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0108 22:53:14.208051  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37063
	I0108 22:53:14.208469  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.208636  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.209216  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.209237  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.209243  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.209264  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.209807  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.209859  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0108 22:53:14.210092  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.210114  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.210319  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.210407  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.210896  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.210912  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.211389  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.211641  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.212923  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.213008  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.215261  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:53:14.213819  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.218415  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:14.216992  407512 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:53:14.219479  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0108 22:53:14.221033  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I0108 22:53:14.221425  407512 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:53:14.222006  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.222556  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:14.224113  407512 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:53:14.224132  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:53:14.224147  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.224118  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:53:14.224186  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:53:14.224194  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.223075  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.223256  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.224245  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.222842  407512 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:53:14.224279  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:53:14.224286  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.224872  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.225484  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:14.225528  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:14.225770  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.225795  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.227076  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.227315  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:14.228065  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.228364  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.228393  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.228564  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.228700  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.228793  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.228924  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.229149  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.229661  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.229683  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.229851  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.230031  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.230095  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.230147  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.230170  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.230322  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.231807  407512 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:53:14.230967  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.233508  407512 out.go:177]   - Using image docker.io/busybox:stable
	W0108 22:53:14.231546  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38346->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.231001  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.233702  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.235288  407512 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:53:14.235304  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:53:14.235321  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.235367  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.235387  407512 retry.go:31] will retry after 324.086744ms: ssh: handshake failed: read tcp 192.168.39.1:38346->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.235419  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.235563  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	W0108 22:53:14.236669  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38352->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.236760  407512 retry.go:31] will retry after 156.651489ms: ssh: handshake failed: read tcp 192.168.39.1:38352->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.238734  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.247564  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0108 22:53:14.247973  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.247987  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.248039  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.248544  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.248597  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:14.248735  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.249019  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:14.249038  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:14.249041  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:14.249396  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:14.249541  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	W0108 22:53:14.250193  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 22:53:14.250221  407512 retry.go:31] will retry after 345.425047ms: ssh: handshake failed: EOF
	I0108 22:53:14.251275  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:14.251670  407512 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:53:14.251720  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:53:14.251753  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:14.255427  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.255948  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:14.255987  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:14.256258  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:14.256544  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:14.256728  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:14.256896  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	W0108 22:53:14.258304  407512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38372->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.258338  407512 retry.go:31] will retry after 319.615904ms: ssh: handshake failed: read tcp 192.168.39.1:38372->192.168.39.129:22: read: connection reset by peer
	I0108 22:53:14.395533  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:53:14.395577  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:53:14.415900  407512 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:53:14.415940  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:53:14.456959  407512 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:53:14.457008  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:53:14.466527  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:53:14.466547  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:53:14.480477  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:53:14.486578  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:53:14.492454  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:53:14.536820  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:53:14.536870  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:53:14.574109  407512 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:53:14.574137  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:53:14.600649  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:53:14.601795  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:53:14.601821  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:53:14.641710  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:53:14.658678  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 22:53:14.658712  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 22:53:14.659662  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:53:14.659685  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:53:14.659775  407512 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:53:14.659793  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:53:14.718709  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:53:14.718749  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:53:14.765937  407512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-910124" context rescaled to 1 replicas
	I0108 22:53:14.766006  407512 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:53:14.768025  407512 out.go:177] * Verifying Kubernetes components...
	I0108 22:53:14.769572  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:53:14.837124  407512 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:53:14.837212  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:53:14.862321  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:53:14.862354  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:53:14.878287  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:53:14.878323  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:53:14.960557  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:53:14.960587  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:53:15.081651  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:53:15.095894  407512 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:53:15.095923  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 22:53:15.145866  407512 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:53:15.145902  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:53:15.163804  407512 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:53:15.163832  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:53:15.184945  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:53:15.212543  407512 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:53:15.212583  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:53:15.222343  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:53:15.222378  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:53:15.226785  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:53:15.247780  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:53:15.271053  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:53:15.271094  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:53:15.293859  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:53:15.365424  407512 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:53:15.365480  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:53:15.376247  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:53:15.385984  407512 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:53:15.386028  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:53:15.440291  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:53:15.440320  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:53:15.443081  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:53:15.443098  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:53:15.496655  407512 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:15.496692  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:53:15.530325  407512 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:53:15.530353  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:53:15.573519  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:53:15.573551  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:53:15.582542  407512 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:53:15.582572  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:53:15.610835  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:15.686977  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:53:15.687038  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:53:15.715637  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:53:15.720203  407512 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:53:15.720234  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:53:15.822526  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:53:15.831216  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:53:15.831249  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:53:15.914183  407512 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:53:15.914216  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:53:16.018223  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:53:21.118667  407512 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.632025985s)
	I0108 22:53:21.118742  407512 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
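(Editor's note) The pipeline that just completed edits the coredns ConfigMap in place: sed inserts a hosts plugin block in front of the existing "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the gateway (192.168.39.1), then kubectl replace pushes the modified Corefile back. A quick way to confirm the injection took effect, as a sketch using the context name from this run:

    kubectl --context addons-910124 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # the printed Corefile should now contain the injected block:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }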
	I0108 22:53:21.118767  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.626266856s)
	I0108 22:53:21.118863  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.118881  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.118957  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.638428198s)
	I0108 22:53:21.119018  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119091  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119311  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:21.119383  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119395  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119408  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119418  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119509  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119523  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119536  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:21.119545  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:21.119681  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119696  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119818  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:21.119843  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:21.119881  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.305314  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.704605113s)
	I0108 22:53:22.305410  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.305433  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.305945  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.305970  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.305993  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.306004  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.306386  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.306466  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.306487  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.736952  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:53:22.736997  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:22.740578  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:22.741075  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:22.741116  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:22.741307  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:22.741579  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:22.741766  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:22.741944  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:22.947905  407512 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.178279802s)
	I0108 22:53:22.947957  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.866261751s)
	I0108 22:53:22.948020  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.306246694s)
	I0108 22:53:22.948050  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948074  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948154  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948181  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948537  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.948582  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948591  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.948602  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948611  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948661  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.948747  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948765  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.948797  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:22.948807  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:22.948885  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.948900  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.949378  407512 node_ready.go:35] waiting up to 6m0s for node "addons-910124" to be "Ready" ...
	I0108 22:53:22.950796  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:22.950836  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:22.950847  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:22.950859  407512 addons.go:473] Verifying addon registry=true in "addons-910124"
	I0108 22:53:22.953031  407512 out.go:177] * Verifying registry addon...
	I0108 22:53:22.953861  407512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:53:22.955631  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 22:53:23.018782  407512 addons.go:237] Setting addon gcp-auth=true in "addons-910124"
	I0108 22:53:23.018877  407512 host.go:66] Checking if "addons-910124" exists ...
	I0108 22:53:23.019452  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:23.019511  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:23.023215  407512 node_ready.go:49] node "addons-910124" has status "Ready":"True"
	I0108 22:53:23.023248  407512 node_ready.go:38] duration metric: took 73.848106ms waiting for node "addons-910124" to be "Ready" ...
	I0108 22:53:23.023262  407512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
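(Editor's note) start.go now blocks for up to 6m0s on node readiness and then on the system-critical pods listed above. Roughly the same check can be reproduced by hand with kubectl wait; the command below is a sketch using the node and context names from this run, not something the test itself executes:

    kubectl --context addons-910124 wait --for=condition=Ready node/addons-910124 --timeout=6m0s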
	I0108 22:53:23.036278  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0108 22:53:23.036907  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:23.037654  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:23.037685  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:23.038171  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:23.038855  407512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 22:53:23.038912  407512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:53:23.056988  407512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0108 22:53:23.057583  407512 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:53:23.058310  407512 main.go:141] libmachine: Using API Version  1
	I0108 22:53:23.058347  407512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:53:23.058849  407512 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:53:23.059170  407512 main.go:141] libmachine: (addons-910124) Calling .GetState
	I0108 22:53:23.061808  407512 main.go:141] libmachine: (addons-910124) Calling .DriverName
	I0108 22:53:23.062238  407512 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:53:23.062275  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHHostname
	I0108 22:53:23.066529  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:23.067142  407512 main.go:141] libmachine: (addons-910124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:ef:95", ip: ""} in network mk-addons-910124: {Iface:virbr1 ExpiryTime:2024-01-08 23:52:30 +0000 UTC Type:0 Mac:52:54:00:c1:ef:95 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-910124 Clientid:01:52:54:00:c1:ef:95}
	I0108 22:53:23.067196  407512 main.go:141] libmachine: (addons-910124) DBG | domain addons-910124 has defined IP address 192.168.39.129 and MAC address 52:54:00:c1:ef:95 in network mk-addons-910124
	I0108 22:53:23.067492  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHPort
	I0108 22:53:23.067790  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHKeyPath
	I0108 22:53:23.068066  407512 main.go:141] libmachine: (addons-910124) Calling .GetSSHUsername
	I0108 22:53:23.068265  407512 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/addons-910124/id_rsa Username:docker}
	I0108 22:53:23.143994  407512 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:53:23.144033  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.205417  407512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace to be "Ready" ...
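(Editor's note) The kapi waits above poll the registry pods (label kubernetes.io/minikube-addons=registry) and the coredns-5dd5756b68-nlqgd pod until they report Ready; the repeating "Pending: [<nil>]" lines below are those polls. The equivalent manual queries, as a sketch with the names taken from this log:

    kubectl --context addons-910124 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=registry -o wide
    kubectl --context addons-910124 -n kube-system wait --for=condition=Ready \
      pod/coredns-5dd5756b68-nlqgd --timeout=6m0s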
	I0108 22:53:23.265392  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.080397188s)
	I0108 22:53:23.265485  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.265512  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.265886  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.265948  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.265972  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.265984  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.265994  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.266346  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.266421  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.266442  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.485290  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:23.485329  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:23.485777  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:23.485792  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:23.485811  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:23.581118  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:23.983064  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.469021  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:24.990529  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.014553  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.787715366s)
	I0108 22:53:25.014611  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014626  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014623  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.766798292s)
	I0108 22:53:25.014664  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.720766667s)
	I0108 22:53:25.014705  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014731  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014746  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.6384685s)
	I0108 22:53:25.014758  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014762  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014777  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.014796  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.014993  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.299319509s)
	I0108 22:53:25.015024  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015049  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015054  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015066  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015069  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015080  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015089  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015096  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.192531806s)
	I0108 22:53:25.015121  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015126  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015133  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015164  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015174  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015182  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015191  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015189  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015200  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015242  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015251  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015260  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015268  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015253  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.404030456s)
	W0108 22:53:25.015308  407512 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:53:25.015386  407512 retry.go:31] will retry after 348.822362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
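(Editor's note) This failure is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define its kind, and the API server had not yet registered snapshot.storage.k8s.io/v1 when the class was submitted, hence "ensure CRDs are installed first". minikube simply retries (and, as logged further down, re-applies with --force), which succeeds once the CRDs are established. Applying in two phases avoids the race; a sketch using the manifest paths from this run:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml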
	I0108 22:53:25.015388  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015423  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015422  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015429  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015436  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015449  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015461  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.015461  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015472  407512 addons.go:473] Verifying addon ingress=true in "addons-910124"
	I0108 22:53:25.015509  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.019466  407512 out.go:177] * Verifying ingress addon...
	I0108 22:53:25.015474  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.015551  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015451  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.015793  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015828  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.015863  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.015890  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021135  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021160  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021162  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021188  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.021191  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.021165  407512 addons.go:473] Verifying addon metrics-server=true in "addons-910124"
	I0108 22:53:25.021199  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.021463  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021480  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021579  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.021611  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021612  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.021624  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.021628  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.021636  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.023182  407512 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910124 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:53:25.022182  407512 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:53:25.048707  407512 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:53:25.048738  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.084061  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.084110  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.084539  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.084566  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.084582  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.239986  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:25.365439  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:53:25.551112  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:25.587053  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:25.830236  407512 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.767955569s)
	I0108 22:53:25.832481  407512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:53:25.830626  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.812348107s)
	I0108 22:53:25.834198  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.835867  407512 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:53:25.834219  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.837788  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:53:25.836412  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.836468  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.837861  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 22:53:25.837869  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.838027  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:25.838041  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:25.838414  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:25.838439  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:25.838445  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:25.838460  407512 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-910124"
	I0108 22:53:25.840248  407512 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:53:25.842836  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:53:25.890887  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:53:25.890911  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:53:25.962178  407512 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:25.962218  407512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:53:26.039387  407512 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:53:26.039424  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.080390  407512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:53:26.157822  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.248348  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.427394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.485825  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:26.560614  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:26.874522  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:26.962296  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.046287  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.349402  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:27.398898  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.547065  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:27.560038  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:27.853299  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:27.989275  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.173564  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.354219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.471049  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:28.543646  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:28.567296  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.201785045s)
	I0108 22:53:28.567393  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:28.567416  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:28.567733  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:28.567808  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:28.567833  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:28.567850  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:28.567862  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:28.568309  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:28.568351  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:28.568371  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:28.879544  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:28.986465  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.071836  407512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.991388015s)
	I0108 22:53:29.071929  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:29.071944  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:29.072382  407512 main.go:141] libmachine: (addons-910124) DBG | Closing plugin on server side
	I0108 22:53:29.072468  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:29.072485  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:29.072504  407512 main.go:141] libmachine: Making call to close driver server
	I0108 22:53:29.072533  407512 main.go:141] libmachine: (addons-910124) Calling .Close
	I0108 22:53:29.073004  407512 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:53:29.073029  407512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:53:29.074433  407512 addons.go:473] Verifying addon gcp-auth=true in "addons-910124"
	I0108 22:53:29.076694  407512 out.go:177] * Verifying gcp-auth addon...
	I0108 22:53:29.079630  407512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:53:29.118876  407512 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:53:29.118910  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.119134  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.352776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:29.463248  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:29.532210  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:29.589766  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:29.733537  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:29.858429  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.007618  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.045121  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.085467  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.356223  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.461648  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:30.530761  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:30.587213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:30.851807  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:30.963282  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.030537  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.085262  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.349415  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.462657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:31.537383  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:31.584324  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:31.857420  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:31.961728  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.035139  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.085002  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.215480  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:32.350450  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.467949  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:32.548207  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:32.590574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:32.861646  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:32.963648  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.032209  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.082964  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.350229  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.462614  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:33.530713  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:33.587491  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:33.852254  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:33.964689  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.035982  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.086305  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.217022  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:34.352717  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.462427  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:34.531705  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:34.585842  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:34.866227  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:34.967156  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.038741  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.090684  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.349174  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.461632  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:35.537152  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:35.587776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:35.856138  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:35.961986  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.030347  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.084316  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.495954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.502389  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:36.504260  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:36.534265  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:36.588934  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:36.849346  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:36.964067  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.030457  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.088305  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.354289  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.491293  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:37.543721  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:37.605884  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:37.857287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:37.962267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.029470  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.084650  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.349989  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.462114  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:38.530503  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:38.589108  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:38.714538  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:38.850234  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:38.966107  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.037336  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.086277  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.409826  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.465135  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:39.548550  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:39.586595  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:39.863458  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:39.965144  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.029580  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:40.084201  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.366967  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.462309  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:40.531406  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:40.583493  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:40.717782  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:40.857432  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:40.963076  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.034754  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.084050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.356337  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.472073  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:41.532975  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:41.584207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:41.849515  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:41.965355  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.030101  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.099155  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.360499  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.460510  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:42.529527  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:42.593292  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:42.821522  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:42.850988  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:42.960846  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.060509  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.085113  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.351843  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.462538  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:43.539900  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:43.593232  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:43.849721  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:43.963014  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.043878  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.086258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.357724  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:44.460879  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:44.530932  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:44.641297  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:44.853061  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.330678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.331650  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.349738  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.353810  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:45.362981  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.468199  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:45.530429  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:45.583718  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:45.850036  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:45.961776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.030544  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.085015  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.350003  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.462611  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:46.532264  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:46.584019  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:46.850350  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:46.968845  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.033895  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.091601  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.361574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.462375  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:47.549971  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:47.584729  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:47.721879  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:47.849787  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:47.962376  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.030258  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.083896  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.402869  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.461775  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:48.536166  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:48.585219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:48.857678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:48.963237  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.030464  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.086899  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.349538  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.462620  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:49.530439  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:49.584393  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:49.849685  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:49.961525  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.071890  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.084628  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.225246  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:50.353119  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.461894  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:50.529991  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:50.585213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:50.865294  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:50.962276  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.030070  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.087400  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.349314  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.462954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:51.530631  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:51.583717  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:51.853394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:51.961727  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.031137  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.085394  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.350331  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.463259  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:52.531065  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:52.584153  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:52.715564  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:52.851339  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:52.961929  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.032101  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.085734  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.350290  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.461129  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:53.531108  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:53.584908  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:53.850148  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:53.963202  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.032208  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.090024  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.356360  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.461673  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:54.531735  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:54.585208  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:54.849635  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:54.962144  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.030374  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.084523  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.214321  407512 pod_ready.go:102] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:55.350258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.461778  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:55.533264  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:55.584683  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:55.852075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:55.961387  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.030128  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.085432  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.245063  407512 pod_ready.go:92] pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.245116  407512 pod_ready.go:81] duration metric: took 33.039649283s waiting for pod "coredns-5dd5756b68-nlqgd" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.245138  407512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.264053  407512 pod_ready.go:92] pod "etcd-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.264099  407512 pod_ready.go:81] duration metric: took 18.952105ms waiting for pod "etcd-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.264115  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.275669  407512 pod_ready.go:92] pod "kube-apiserver-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.275712  407512 pod_ready.go:81] duration metric: took 11.586815ms waiting for pod "kube-apiserver-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.275732  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.292749  407512 pod_ready.go:92] pod "kube-controller-manager-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.292785  407512 pod_ready.go:81] duration metric: took 17.043212ms waiting for pod "kube-controller-manager-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.292804  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qzsv5" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.302383  407512 pod_ready.go:92] pod "kube-proxy-qzsv5" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.302414  407512 pod_ready.go:81] duration metric: took 9.601523ms waiting for pod "kube-proxy-qzsv5" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.302426  407512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.352819  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.463685  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:56.531641  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:56.584176  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:56.611203  407512 pod_ready.go:92] pod "kube-scheduler-addons-910124" in "kube-system" namespace has status "Ready":"True"
	I0108 22:53:56.611236  407512 pod_ready.go:81] duration metric: took 308.803119ms waiting for pod "kube-scheduler-addons-910124" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.611248  407512 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace to be "Ready" ...
	I0108 22:53:56.849625  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:56.964740  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.029271  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.087336  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.355597  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.461277  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:57.530902  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:57.584569  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:57.851952  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:57.962163  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:58.029338  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.084855  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.353747  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.468350  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:58.530348  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:58.586166  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:58.621220  407512 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:53:58.850519  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:58.961060  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:59.030056  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.085256  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.348655  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.461509  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:53:59.530093  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:53:59.586002  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:53:59.849731  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:53:59.963984  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:00.032316  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.084798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.351095  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.462588  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:00.530347  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:00.584893  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:00.849285  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:00.975288  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:01.032452  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.083923  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.126803  407512 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:01.350329  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.462604  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:01.532692  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:01.585064  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:01.858550  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:01.960949  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:02.030083  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.084780  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.362315  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.471502  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:02.556350  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:02.585467  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:02.636236  407512 pod_ready.go:92] pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:54:02.636292  407512 pod_ready.go:81] duration metric: took 6.025034591s waiting for pod "metrics-server-7c66d45ddc-fspmw" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:02.636311  407512 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:02.849707  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:02.961865  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:03.030145  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.085954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.352801  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.470121  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:03.530277  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:03.587712  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:03.850508  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:03.964003  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:04.030842  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:04.085693  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.509608  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.522875  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:04.806866  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:04.807332  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:04.826998  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:04.855790  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:04.962417  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:05.029196  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.084410  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.352227  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.462511  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:05.533515  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:05.588113  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:05.849670  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:05.962918  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:06.032744  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.085678  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.350050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.466258  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:06.529942  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:06.585383  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:06.851287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:06.965282  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:07.030598  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.084225  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.154719  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:07.350965  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.463025  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:07.530226  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:07.584574  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:07.851212  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:07.962682  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:08.032125  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.085267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.349554  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.462700  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:08.530429  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:08.584353  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:08.851397  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:08.965109  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:09.030000  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.086543  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.349646  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.462585  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:09.532690  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:09.584756  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:09.645208  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:09.853340  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:09.964643  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:10.030931  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:10.085246  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.696816  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:10.714751  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:10.715002  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:10.716244  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.850535  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:10.962827  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:11.031333  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:11.083905  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.350213  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.464073  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:11.529791  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:11.584657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:11.849251  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:11.962276  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:12.030861  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:12.083923  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:12.145857  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:12.351092  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:12.463219  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:12.531880  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:12.584618  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.206267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.216850  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:13.228573  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.234083  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:13.350085  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.466267  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:13.531425  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:13.584169  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:13.849619  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:13.961702  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:14.030388  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:14.084760  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:14.350525  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.462423  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:14.530385  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:14.585959  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:14.650294  407512 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"False"
	I0108 22:54:14.849688  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:14.961936  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.029179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:15.085199  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:15.144121  407512 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace has status "Ready":"True"
	I0108 22:54:15.144149  407512 pod_ready.go:81] duration metric: took 12.507829966s waiting for pod "nvidia-device-plugin-daemonset-n8pqg" in "kube-system" namespace to be "Ready" ...
	I0108 22:54:15.144171  407512 pod_ready.go:38] duration metric: took 52.120894643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:54:15.144192  407512 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:54:15.144232  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:15.144297  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:15.231327  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:15.231369  407512 cri.go:89] found id: ""
	I0108 22:54:15.231380  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:15.231458  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.256567  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:15.256677  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:15.319314  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:15.319346  407512 cri.go:89] found id: ""
	I0108 22:54:15.319368  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:15.319428  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.331046  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:15.331141  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:15.351207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.431035  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:15.431075  407512 cri.go:89] found id: ""
	I0108 22:54:15.431085  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:15.431158  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.442387  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:15.442481  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:15.462160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.531167  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:15.543237  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:15.543265  407512 cri.go:89] found id: ""
	I0108 22:54:15.543276  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:15.543338  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.551491  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:15.551600  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:15.585776  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:15.650081  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:15.650114  407512 cri.go:89] found id: ""
	I0108 22:54:15.650128  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:15.650214  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.666439  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:15.666545  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:15.778534  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:15.778562  407512 cri.go:89] found id: ""
	I0108 22:54:15.778571  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:15.778637  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:15.787703  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:15.787825  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:15.853478  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:15.900606  407512 cri.go:89] found id: ""
	I0108 22:54:15.900639  407512 logs.go:284] 0 containers: []
	W0108 22:54:15.900648  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:15.900662  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:15.900682  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:15.962609  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:15.979124  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:15.979164  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:16.033658  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:16.084479  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:16.103844  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:16.103890  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:16.206314  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:16.206366  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:16.350708  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.462437  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:16.530334  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:16.585798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:16.633594  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:16.633645  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:16.735780  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:16.735813  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 22:54:16.846919  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:16.847105  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:16.852687  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:16.865973  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:16.866031  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:16.905038  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:16.905091  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:16.966082  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:17.031596  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:17.084532  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:17.166846  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:17.166899  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:17.230940  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:17.231008  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:17.347511  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:17.347558  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:17.349507  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:17.394343  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:17.394410  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:17.394549  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:17.394569  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:17.394580  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:17.394595  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:17.394608  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
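The cycle that ends here tails each component's container logs ("crictl logs --tail 400 <id>") and the kubelet and crio journals, then reports any kubelet lines it classifies as problems, such as the two "kube-root-ca.crt ... is forbidden" reflector errors repeated above. A rough hand-rolled equivalent is sketched below; the match patterns are assumptions chosen to catch the lines shown in this report, not the harness's actual rules.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Tail the kubelet journal the same way the log above does.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Assumed, simplified patterns: flag error-level klog lines and anything
	// mentioning "forbidden" or "failed to list", which covers the reflector
	// errors surfaced in this run.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, " E0") ||
			strings.Contains(line, "forbidden") ||
			strings.Contains(line, "failed to list") {
			fmt.Println("possible kubelet problem:", line)
		}
	}
}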
	I0108 22:54:17.461326  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:17.531793  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:17.587809  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:17.853819  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:17.962004  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:18.031900  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:18.088046  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:18.350968  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:18.463846  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:18.529543  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:18.585159  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:18.850853  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:18.963561  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:19.031096  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:19.084807  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:19.358524  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:19.464549  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:19.534277  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:19.587135  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:19.850878  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:19.965920  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:20.049225  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:20.090461  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:20.365097  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:20.489954  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:20.529905  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:20.585561  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:20.851148  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:20.961580  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:21.030377  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:21.085050  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:21.351010  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:21.462119  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:21.530899  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:21.584992  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:21.851183  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:21.964889  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:22.032468  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:22.084383  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:22.349249  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:22.461804  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:22.531272  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:22.585296  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:22.852966  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:22.961583  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:23.030592  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:23.084835  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:23.349533  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:23.461190  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:23.538313  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:23.584299  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:23.851184  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:23.960827  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:24.030250  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:24.089200  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:24.351586  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:24.462858  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:24.531031  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:24.586411  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:24.850554  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:24.962798  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:25.037696  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:25.088196  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:25.362995  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:25.462738  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:25.531973  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:25.585413  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:25.850814  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:25.966180  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:26.029962  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:26.085542  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:26.350557  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:26.462260  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:26.530914  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:26.586631  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:26.850090  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:26.962933  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:27.030114  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:27.083448  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
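Most of the lines above are one-second polls of four addon label selectors (registry, ingress-nginx, gcp-auth, csi-hostpath-driver), each reported as Pending until its pod becomes ready. A rough way to reproduce a single selector's wait outside the harness is to poll kubectl until every matching pod reports phase Running. Sketch only: the selector and kube-system namespace come from the log, while the interval, timeout, and Running-only success condition are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForSelector polls `kubectl get pods` for the given label selector until
// every matching pod reports phase Running, or the timeout expires.
func waitForSelector(ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %s", selector, timeout)
}

func main() {
	if err := waitForSelector("kube-system", "kubernetes.io/minikube-addons=registry", 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("registry addon pods are Running")
}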
	I0108 22:54:27.396115  407512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:54:27.453376  407512 api_server.go:72] duration metric: took 1m12.687320115s to wait for apiserver process to appear ...
	I0108 22:54:27.453417  407512 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:54:27.453470  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:27.453548  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:27.573794  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:27.573829  407512 cri.go:89] found id: ""
	I0108 22:54:27.573852  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:27.573927  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.597369  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:27.597466  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:27.698918  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:27.698957  407512 cri.go:89] found id: ""
	I0108 22:54:27.698969  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:27.699072  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.726352  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:27.726454  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:27.751673  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:27.753439  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:27.758031  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:27.758454  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:27.825675  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:27.825700  407512 cri.go:89] found id: ""
	I0108 22:54:27.825711  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:27.825775  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.834803  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:27.834896  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:27.853731  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:27.927068  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:27.927112  407512 cri.go:89] found id: ""
	I0108 22:54:27.927126  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:27.927208  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:27.932105  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:27.932177  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:27.962872  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:28.031219  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:28.078753  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:28.078786  407512 cri.go:89] found id: ""
	I0108 22:54:28.078796  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:28.078853  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:28.084792  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:28.091297  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:28.091406  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:28.172616  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:28.172646  407512 cri.go:89] found id: ""
	I0108 22:54:28.172659  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:28.172733  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:28.186169  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:28.186232  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:28.289669  407512 cri.go:89] found id: ""
	I0108 22:54:28.289696  407512 logs.go:284] 0 containers: []
	W0108 22:54:28.289705  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:28.289717  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:28.289738  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:28.351822  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:28.391689  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:28.391739  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:54:28.462480  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0108 22:54:28.480025  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:28.480210  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:28.499203  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:28.499238  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:28.530750  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:28.547794  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:28.547837  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:28.584331  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:28.849473  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:28.849533  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:28.853795  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:28.950311  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:28.950371  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:28.962622  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:29.031992  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:29.059949  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:29.059986  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:29.085032  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:29.150376  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:29.150421  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:29.247990  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:29.248038  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:29.351968  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:29.462754  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:29.532279  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:29.584551  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:29.684754  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:29.684811  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:29.761957  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:29.762007  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:29.860465  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:29.906592  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:29.906659  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:29.906773  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:29.906793  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:29.906813  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:29.906828  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:29.906837  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:29.961602  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:30.030655  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:30.083825  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:30.351242  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:30.462022  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:30.545969  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:30.592620  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:30.852761  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:30.962402  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:54:31.035727  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:31.109360  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:31.350955  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:31.465000  407512 kapi.go:107] duration metric: took 1m8.509372886s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 22:54:31.530021  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:31.586089  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:31.862191  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:32.030317  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:32.095075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:32.349427  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:32.532791  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:32.585839  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:32.852207  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:33.030368  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:33.085838  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:33.351051  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:33.531133  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:33.584881  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:33.853562  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:34.030356  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:34.085453  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:34.361940  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:34.532834  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:34.589475  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:34.849816  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:35.051994  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:35.084479  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:35.350381  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:35.530730  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:35.583961  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:35.853821  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:36.030587  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:36.085695  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:36.354518  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:36.532353  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:36.588316  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:36.851485  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:37.030387  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:37.085075  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:37.349858  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:37.533092  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:37.584986  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:37.850366  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:38.030748  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:38.084292  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:38.352160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:38.531030  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:38.585101  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:38.849843  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.031571  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:39.084693  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:39.356883  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.530392  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:39.584836  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:39.851245  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:39.908268  407512 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0108 22:54:39.915018  407512 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0108 22:54:39.916444  407512 api_server.go:141] control plane version: v1.28.4
	I0108 22:54:39.916493  407512 api_server.go:131] duration metric: took 12.463065793s to wait for apiserver health ...
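At this point the harness has moved from waiting on the apiserver process (the pgrep above) to probing https://192.168.39.129:8443/healthz until it returns HTTP 200 with body "ok", as recorded a few lines up. A minimal standalone equivalent is sketched below; it skips TLS verification for brevity, whereas a proper check would trust the cluster CA, and the timeout is an assumption.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the given healthz URL until it returns HTTP 200 with
// body "ok", or the deadline passes.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := checkHealthz("https://192.168.39.129:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}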
	I0108 22:54:39.916504  407512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:54:39.916530  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:54:39.916598  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:54:39.967466  407512 cri.go:89] found id: "c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:39.967497  407512 cri.go:89] found id: ""
	I0108 22:54:39.967507  407512 logs.go:284] 1 containers: [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9]
	I0108 22:54:39.967572  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:39.986014  407512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:54:39.986106  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:54:40.031259  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:40.053801  407512 cri.go:89] found id: "bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:40.053839  407512 cri.go:89] found id: ""
	I0108 22:54:40.053851  407512 logs.go:284] 1 containers: [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842]
	I0108 22:54:40.053915  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.059802  407512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:54:40.059893  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:54:40.085160  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:40.111011  407512 cri.go:89] found id: "7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:40.111055  407512 cri.go:89] found id: ""
	I0108 22:54:40.111071  407512 logs.go:284] 1 containers: [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074]
	I0108 22:54:40.111140  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.116791  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:54:40.116886  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:54:40.167692  407512 cri.go:89] found id: "4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:40.167726  407512 cri.go:89] found id: ""
	I0108 22:54:40.167737  407512 logs.go:284] 1 containers: [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e]
	I0108 22:54:40.167806  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.173863  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:54:40.173963  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:54:40.220957  407512 cri.go:89] found id: "22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:40.220992  407512 cri.go:89] found id: ""
	I0108 22:54:40.221006  407512 logs.go:284] 1 containers: [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae]
	I0108 22:54:40.221076  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.227505  407512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:54:40.227587  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:54:40.286579  407512 cri.go:89] found id: "22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:40.286608  407512 cri.go:89] found id: ""
	I0108 22:54:40.286617  407512 logs.go:284] 1 containers: [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a]
	I0108 22:54:40.286687  407512 ssh_runner.go:195] Run: which crictl
	I0108 22:54:40.296616  407512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:54:40.296707  407512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:54:40.352368  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:40.494811  407512 cri.go:89] found id: ""
	I0108 22:54:40.494849  407512 logs.go:284] 0 containers: []
	W0108 22:54:40.494861  407512 logs.go:286] No container was found matching "kindnet"
	I0108 22:54:40.494875  407512 logs.go:123] Gathering logs for kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] ...
	I0108 22:54:40.494896  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9"
	I0108 22:54:40.533081  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:40.583921  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:40.650324  407512 logs.go:123] Gathering logs for etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] ...
	I0108 22:54:40.650372  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842"
	I0108 22:54:40.806617  407512 logs.go:123] Gathering logs for kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] ...
	I0108 22:54:40.806662  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae"
	I0108 22:54:40.862992  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:40.918840  407512 logs.go:123] Gathering logs for kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] ...
	I0108 22:54:40.918883  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a"
	I0108 22:54:41.038263  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:41.060567  407512 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:54:41.060614  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:54:41.090070  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:41.351166  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:41.533810  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:41.575200  407512 logs.go:123] Gathering logs for container status ...
	I0108 22:54:41.575262  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:54:41.591682  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:41.770685  407512 logs.go:123] Gathering logs for kubelet ...
	I0108 22:54:41.770729  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 22:54:41.854438  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0108 22:54:41.911783  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:41.912008  407512 logs.go:138] Found kubelet problem: Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:41.936509  407512 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:54:41.936565  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:54:42.046925  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:42.099287  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:42.218600  407512 logs.go:123] Gathering logs for kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] ...
	I0108 22:54:42.218644  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e"
	I0108 22:54:42.335092  407512 logs.go:123] Gathering logs for dmesg ...
	I0108 22:54:42.335147  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:54:42.360553  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:42.403165  407512 logs.go:123] Gathering logs for coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] ...
	I0108 22:54:42.403231  407512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074"
	I0108 22:54:42.534984  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:42.544230  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:42.544273  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:54:42.544349  407512 out.go:239] X Problems detected in kubelet:
	W0108 22:54:42.544369  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: W0108 22:53:23.821949    1247 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	W0108 22:54:42.544383  407512 out.go:239]   Jan 08 22:53:23 addons-910124 kubelet[1247]: E0108 22:53:23.821988    1247 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-910124" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-910124' and this object
	I0108 22:54:42.544399  407512 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:42.544409  407512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:42.598094  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:42.858951  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:43.030492  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:43.089635  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:43.350851  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:43.535968  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:43.585800  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:43.855097  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:44.038024  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:44.086132  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:44.359926  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:44.531963  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:44.586107  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:44.850657  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:45.031827  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:45.084636  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:45.349841  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:45.539034  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:45.589869  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:45.850847  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:46.030771  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:46.099243  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:54:46.350494  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:46.546703  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:46.625341  407512 kapi.go:107] duration metric: took 1m17.545710634s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 22:54:46.627584  407512 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-910124 cluster.
	I0108 22:54:46.629889  407512 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 22:54:46.631941  407512 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 22:54:46.860454  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:47.031260  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:47.349873  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:47.530763  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:47.850804  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:48.030866  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:48.350058  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:48.532835  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:48.850027  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:49.031306  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:49.352397  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:49.531446  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:49.849748  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:50.030728  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:50.351033  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:50.532223  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:50.850425  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:51.030718  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:51.349156  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:51.531587  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:51.858715  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:52.031104  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:52.351510  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:52.530860  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:52.557078  407512 system_pods.go:59] 18 kube-system pods found
	I0108 22:54:52.557129  407512 system_pods.go:61] "coredns-5dd5756b68-nlqgd" [f78d5853-fe43-42cb-b283-3cfabf7408f1] Running
	I0108 22:54:52.557137  407512 system_pods.go:61] "csi-hostpath-attacher-0" [a4346e4e-3ea8-445e-b2ad-5ba0bb33583c] Running
	I0108 22:54:52.557145  407512 system_pods.go:61] "csi-hostpath-resizer-0" [9ad15dcb-eb67-4e9c-b2a1-d8e0fdc73bec] Running
	I0108 22:54:52.557158  407512 system_pods.go:61] "csi-hostpathplugin-t58w7" [135b9d3b-3b61-4d16-beba-9b88351a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:54:52.557167  407512 system_pods.go:61] "etcd-addons-910124" [a5756142-3dfa-4e20-a8cc-175b2e02fcab] Running
	I0108 22:54:52.557176  407512 system_pods.go:61] "kube-apiserver-addons-910124" [aaf01ae6-4110-4e46-b635-1785e8606696] Running
	I0108 22:54:52.557183  407512 system_pods.go:61] "kube-controller-manager-addons-910124" [8b30f0f7-629e-4ffd-8cd1-978c3a82dede] Running
	I0108 22:54:52.557191  407512 system_pods.go:61] "kube-ingress-dns-minikube" [1421ba58-25cc-45eb-b175-3febdab83a8e] Running
	I0108 22:54:52.557199  407512 system_pods.go:61] "kube-proxy-qzsv5" [5b398884-3550-4727-bf6e-9d10cd7e63ba] Running
	I0108 22:54:52.557217  407512 system_pods.go:61] "kube-scheduler-addons-910124" [ceb95c3e-4ec5-47dd-b38b-ae6fd7b62f1d] Running
	I0108 22:54:52.557224  407512 system_pods.go:61] "metrics-server-7c66d45ddc-fspmw" [e7812f80-df3d-4fc2-8430-9c7246f638f0] Running
	I0108 22:54:52.557234  407512 system_pods.go:61] "nvidia-device-plugin-daemonset-n8pqg" [22231673-96e3-48d4-a97e-9d77a615c63c] Running
	I0108 22:54:52.557242  407512 system_pods.go:61] "registry-5phsw" [886a9630-22c3-4d03-b42f-b2c1186c7c19] Running
	I0108 22:54:52.557252  407512 system_pods.go:61] "registry-proxy-br7js" [770ce618-3a9f-47a5-9070-e7364b2a564a] Running
	I0108 22:54:52.557261  407512 system_pods.go:61] "snapshot-controller-58dbcc7b99-b9rcb" [fe64aff3-259f-4596-bff9-821a4d91caa9] Running
	I0108 22:54:52.557268  407512 system_pods.go:61] "snapshot-controller-58dbcc7b99-db2j5" [a6367514-fb8f-4ce6-995c-3be39edd4eed] Running
	I0108 22:54:52.557275  407512 system_pods.go:61] "storage-provisioner" [c68caaf9-4a8b-49b7-8d56-414aabff20a5] Running
	I0108 22:54:52.557285  407512 system_pods.go:61] "tiller-deploy-7b677967b9-w9l5g" [d00ef7bc-d0f2-4fce-9757-1a825ca34ef8] Running
	I0108 22:54:52.557300  407512 system_pods.go:74] duration metric: took 12.640784835s to wait for pod list to return data ...
	I0108 22:54:52.557316  407512 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:54:52.560140  407512 default_sa.go:45] found service account: "default"
	I0108 22:54:52.560166  407512 default_sa.go:55] duration metric: took 2.839267ms for default service account to be created ...
	I0108 22:54:52.560178  407512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:54:52.576822  407512 system_pods.go:86] 18 kube-system pods found
	I0108 22:54:52.576878  407512 system_pods.go:89] "coredns-5dd5756b68-nlqgd" [f78d5853-fe43-42cb-b283-3cfabf7408f1] Running
	I0108 22:54:52.576888  407512 system_pods.go:89] "csi-hostpath-attacher-0" [a4346e4e-3ea8-445e-b2ad-5ba0bb33583c] Running
	I0108 22:54:52.576896  407512 system_pods.go:89] "csi-hostpath-resizer-0" [9ad15dcb-eb67-4e9c-b2a1-d8e0fdc73bec] Running
	I0108 22:54:52.576907  407512 system_pods.go:89] "csi-hostpathplugin-t58w7" [135b9d3b-3b61-4d16-beba-9b88351a4d5c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0108 22:54:52.576916  407512 system_pods.go:89] "etcd-addons-910124" [a5756142-3dfa-4e20-a8cc-175b2e02fcab] Running
	I0108 22:54:52.576925  407512 system_pods.go:89] "kube-apiserver-addons-910124" [aaf01ae6-4110-4e46-b635-1785e8606696] Running
	I0108 22:54:52.576932  407512 system_pods.go:89] "kube-controller-manager-addons-910124" [8b30f0f7-629e-4ffd-8cd1-978c3a82dede] Running
	I0108 22:54:52.576940  407512 system_pods.go:89] "kube-ingress-dns-minikube" [1421ba58-25cc-45eb-b175-3febdab83a8e] Running
	I0108 22:54:52.576947  407512 system_pods.go:89] "kube-proxy-qzsv5" [5b398884-3550-4727-bf6e-9d10cd7e63ba] Running
	I0108 22:54:52.576953  407512 system_pods.go:89] "kube-scheduler-addons-910124" [ceb95c3e-4ec5-47dd-b38b-ae6fd7b62f1d] Running
	I0108 22:54:52.576960  407512 system_pods.go:89] "metrics-server-7c66d45ddc-fspmw" [e7812f80-df3d-4fc2-8430-9c7246f638f0] Running
	I0108 22:54:52.576967  407512 system_pods.go:89] "nvidia-device-plugin-daemonset-n8pqg" [22231673-96e3-48d4-a97e-9d77a615c63c] Running
	I0108 22:54:52.576975  407512 system_pods.go:89] "registry-5phsw" [886a9630-22c3-4d03-b42f-b2c1186c7c19] Running
	I0108 22:54:52.576986  407512 system_pods.go:89] "registry-proxy-br7js" [770ce618-3a9f-47a5-9070-e7364b2a564a] Running
	I0108 22:54:52.576994  407512 system_pods.go:89] "snapshot-controller-58dbcc7b99-b9rcb" [fe64aff3-259f-4596-bff9-821a4d91caa9] Running
	I0108 22:54:52.577003  407512 system_pods.go:89] "snapshot-controller-58dbcc7b99-db2j5" [a6367514-fb8f-4ce6-995c-3be39edd4eed] Running
	I0108 22:54:52.577016  407512 system_pods.go:89] "storage-provisioner" [c68caaf9-4a8b-49b7-8d56-414aabff20a5] Running
	I0108 22:54:52.577026  407512 system_pods.go:89] "tiller-deploy-7b677967b9-w9l5g" [d00ef7bc-d0f2-4fce-9757-1a825ca34ef8] Running
	I0108 22:54:52.577043  407512 system_pods.go:126] duration metric: took 16.855372ms to wait for k8s-apps to be running ...
	I0108 22:54:52.577058  407512 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:54:52.577135  407512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:54:52.611238  407512 system_svc.go:56] duration metric: took 34.170178ms WaitForService to wait for kubelet.
	I0108 22:54:52.611275  407512 kubeadm.go:581] duration metric: took 1m37.845228616s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:54:52.611303  407512 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:54:52.614675  407512 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:54:52.614709  407512 node_conditions.go:123] node cpu capacity is 2
	I0108 22:54:52.614722  407512 node_conditions.go:105] duration metric: took 3.4125ms to run NodePressure ...
	I0108 22:54:52.614737  407512 start.go:228] waiting for startup goroutines ...
	I0108 22:54:52.850674  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:53.029892  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:53.349195  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:53.531640  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:53.849217  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:54.030335  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:54.355763  407512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:54:54.530249  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:54.858273  407512 kapi.go:107] duration metric: took 1m29.015439976s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 22:54:55.029156  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:55.530993  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:56.030758  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:56.530881  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:57.029281  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:57.531777  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:58.029856  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:58.530504  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:59.030719  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:54:59.531806  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:00.030204  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:00.531344  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:01.044209  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:01.530797  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:02.029480  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:02.531736  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:03.032748  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:03.533457  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:04.034180  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:04.531468  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:05.033996  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:05.530616  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:06.031073  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:06.531594  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:07.035861  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:07.532283  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:08.030125  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:08.531296  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:09.031229  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:09.531454  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:10.030870  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:10.530969  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:11.030914  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:11.529529  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:12.031141  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:12.530036  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:13.030404  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:13.531845  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:14.030636  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:14.535296  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:15.030542  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:15.531518  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:16.031226  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:16.530327  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:17.030177  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:17.530774  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:18.029547  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:18.530181  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:19.030288  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:19.531709  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:20.030508  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:20.531741  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:21.030578  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:21.530604  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:22.031179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:22.529319  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:23.030021  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:23.530488  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:24.032685  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:24.529917  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:25.029980  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:25.529858  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:26.030726  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:26.531325  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:27.030956  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:27.529875  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:28.030165  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:28.530213  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:29.029862  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:29.530241  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:30.030062  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:30.529966  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:31.033626  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:31.531064  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:32.030102  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:32.530007  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:33.030906  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:33.533644  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:34.039131  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:34.531549  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:35.037737  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:35.530214  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:36.030102  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:36.531754  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:37.033179  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:37.529988  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:38.030378  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:38.530346  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:39.030514  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:39.529984  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:40.030420  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:40.530241  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:41.030883  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:41.532388  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:42.032023  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:42.530097  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:43.032262  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:43.530524  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:44.030854  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:44.531105  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:45.032862  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:45.534775  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:46.031402  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:46.534273  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:47.030758  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:47.530871  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:48.036485  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:48.532251  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:49.032267  407512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:55:49.530237  407512 kapi.go:107] duration metric: took 2m24.508042334s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 22:55:49.532237  407512 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0108 22:55:49.533766  407512 addons.go:508] enable addons completed in 2m35.485893061s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0108 22:55:49.533813  407512 start.go:233] waiting for cluster config update ...
	I0108 22:55:49.533848  407512 start.go:242] writing updated cluster config ...
	I0108 22:55:49.534192  407512 ssh_runner.go:195] Run: rm -f paused
	I0108 22:55:49.594206  407512 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:55:49.596027  407512 out.go:177] * Done! kubectl is now configured to use "addons-910124" cluster and "default" namespace by default
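	Note on the kapi.go lines above: the repeated kapi.go:96 entries are minikube polling addon pods by label selector until they leave Pending, and kapi.go:107 records when a selector is finally satisfied. As a rough, hand-written equivalent of that poll loop (this is not a command the test runs; the namespaces and the 10m timeout are assumptions for illustration only), the same readiness condition could be checked with kubectl wait:

	  # Hypothetical stand-in for the kapi.go:96 poll loop; namespaces and timeout are assumed, not taken from the test.
	  kubectl --context addons-910124 -n kube-system wait pod \
	    -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=10m
	  kubectl --context addons-910124 -n ingress-nginx wait pod \
	    -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=10m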
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 22:52:27 UTC, ends at Mon 2024-01-08 22:59:03 UTC. --
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.596254491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704754743596235000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=f22782a5-1f17-4c57-a842-c783d88a8dd0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.597419136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d2ea3db7-e0c9-4312-80ba-4568b625ee5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.597495694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d2ea3db7-e0c9-4312-80ba-4568b625ee5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.597870624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e17041b291373e7761a7f4243b2501191d7678c40e889e4e76453674b91bcf1,PodSandboxId:af7111e60555ea96a1fc326e671c705e4d1f02b159efdaab8a3f3d18a3789c11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704754735525197077,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-57kht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a30c05c-2acc-44a4-ac75-a83dc8fd428a,},Annotations:map[string]string{io.kubernetes.container.hash: eb5a2f5f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fc2e79ec2203ea40653d7e55595cc975084915585c35e4181c2b9f0486090,PodSandboxId:173829eb9c77769f06d3d20bf73716dd47ec11658b3673677f88c09cd4bba297,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704754596188310598,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b36765f8-4cc3-464b-a1d8-aac9847d8391,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b9cfeff,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe927a0bc1560553e22aa755a5a97997dcfebf57df4fcb9ad928ce7e711fe413,PodSandboxId:9d966426c3f9a8e74091612c0c2d9266cef236b56c9b0fb6f981ff4eb2615bbc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704754574337556661,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-sfj86,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c595560c-8e3d-4723-8840-ad6fe139c985,},Annotations:map[string]string{io.kubernetes.container.hash: 7de1d685,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34,PodSandboxId:961bce878f46dca22aaa2d6f89e98257b02521d47a7076b4d0e0ce76d5aadf9b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704754486250218772,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-4wmbs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e71427ad-d27d-46a3-9de7-ebb6b117a0af,},Annotations:map[string]string{io.kubernetes.container.hash: adcf61c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c,PodSandboxId:4ade749b95a0466d1db9675292f9ce17b052f6c27a6618200bfebefd3d3ea9e9,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523b
f5,State:CONTAINER_RUNNING,CreatedAt:1704754470402570106,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-br7js,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ce618-3a9f-47a5-9070-e7364b2a564a,},Annotations:map[string]string{io.kubernetes.container.hash: ce5a1bbd,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d335b3cfb0835b4edc1ee00b4ca8778961b740f6d90d988ac28e635ea65ece19,PodSandboxId:03b58b75139d9cdc2d98acaa4b1e6bbdbb2e967c9872b825e0a8f2f3c1578629,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,Annotations:map[s
tring]string{},},ImageRef:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,State:CONTAINER_RUNNING,CreatedAt:1704754443248285462,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-5phsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886a9630-22c3-4d03-b42f-b2c1186c7c19,},Annotations:map[string]string{io.kubernetes.container.hash: d038f329,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465,PodSandboxId:a7736af30bf7630abd0019ac04ee65207b1273a0be72a1c029718261f65905a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704754417581289479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c68caaf9-4a8b-49b7-8d56-414aabff20a5,},Annotations:map[string]string{io.kubernetes.container.hash: a6db657c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c5290454df47f080e78d601346f14c5fc9e28b1a34bd7ced142e2c13f451a0,PodSandboxId:173b22b3f6cbc657f9425e5c84b7f45b27c67ce10ccf9dd2de41b1f666a7fb27,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e
15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704754417499833705,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-d5pgh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8df1e3cb-5981-4ca0-8178-2b3f4ef883db,},Annotations:map[string]string{io.kubernetes.container.hash: cfaf1fdf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae,PodSandboxId:488589f8ed1e4559b1299e36c63d7205877a657d9c7b431243025259ce339a3b,Metadata:&ContainerMetadata
{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704754404961428563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b398884-3550-4727-bf6e-9d10cd7e63ba,},Annotations:map[string]string{io.kubernetes.container.hash: 55bb5e79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074,PodSandboxId:b39d6d9e87fade91c4d974f622545b04492d5a89a0d489a5629b96ff8bb1cf88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704754397343329787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nlqgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78d5853-fe43-42cb-b283-3cfabf7408f1,},Annotations:map[string]string{io.kubernetes.container.hash: 90a12478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e23cb3
4099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e,PodSandboxId:f57afd6d60ec199e7063beb2b8051e39cc0fcb07c5e970f4ca56a5aa91abba70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704754372741453134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9513a9abfc4bec220ed857875c9d44,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ebaab17be2d1658d6363
822826cf13ff672594ba08a4eab65a1faa2395939a,PodSandboxId:c6fb9122e1d1304d69b0d61a57b3104e55425b0846394d289c4484ac2b974363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704754372626515249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12296721335dc694685986b99e962f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b
ef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842,PodSandboxId:dec7eaaf111e19534259afd1431dd50ca1c114c743d6da40120744d3fdf67bb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704754372598213243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c8e792aa9f76cb090c06a2a4f81415,},Annotations:map[string]string{io.kubernetes.container.hash: 5b8f5917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a77
60a02ba9,PodSandboxId:e2f8b537c3e136eb7ec6ac892e273577f2512da5c22d7d5842a9cfff2f7f14df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704754372267750926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3a287bd85c417eb3c4253cb1a5b935,},Annotations:map[string]string{io.kubernetes.container.hash: cc3be73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d2ea3db7-e0c9-4312-80ba-4568b625ee5d na
me=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.651309798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c2f50b60-a33f-4570-bfa8-f0e42a9e0b5b name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.651411693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c2f50b60-a33f-4570-bfa8-f0e42a9e0b5b name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.654786934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1b6e81e0-9eed-4e49-ab3f-771082d6856f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.656251584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704754743656226784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=1b6e81e0-9eed-4e49-ab3f-771082d6856f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.659229184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e53caaeb-9975-4804-87a8-41a72c9f24d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.662210711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e53caaeb-9975-4804-87a8-41a72c9f24d6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.663709202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e17041b291373e7761a7f4243b2501191d7678c40e889e4e76453674b91bcf1,PodSandboxId:af7111e60555ea96a1fc326e671c705e4d1f02b159efdaab8a3f3d18a3789c11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704754735525197077,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-57kht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a30c05c-2acc-44a4-ac75-a83dc8fd428a,},Annotations:map[string]string{io.kubernetes.container.hash: eb5a2f5f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fc2e79ec2203ea40653d7e55595cc975084915585c35e4181c2b9f0486090,PodSandboxId:173829eb9c77769f06d3d20bf73716dd47ec11658b3673677f88c09cd4bba297,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704754596188310598,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b36765f8-4cc3-464b-a1d8-aac9847d8391,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b9cfeff,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe927a0bc1560553e22aa755a5a97997dcfebf57df4fcb9ad928ce7e711fe413,PodSandboxId:9d966426c3f9a8e74091612c0c2d9266cef236b56c9b0fb6f981ff4eb2615bbc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704754574337556661,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-sfj86,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c595560c-8e3d-4723-8840-ad6fe139c985,},Annotations:map[string]string{io.kubernetes.container.hash: 7de1d685,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34,PodSandboxId:961bce878f46dca22aaa2d6f89e98257b02521d47a7076b4d0e0ce76d5aadf9b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704754486250218772,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-4wmbs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e71427ad-d27d-46a3-9de7-ebb6b117a0af,},Annotations:map[string]string{io.kubernetes.container.hash: adcf61c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c,PodSandboxId:4ade749b95a0466d1db9675292f9ce17b052f6c27a6618200bfebefd3d3ea9e9,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523b
f5,State:CONTAINER_RUNNING,CreatedAt:1704754470402570106,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-br7js,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ce618-3a9f-47a5-9070-e7364b2a564a,},Annotations:map[string]string{io.kubernetes.container.hash: ce5a1bbd,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d335b3cfb0835b4edc1ee00b4ca8778961b740f6d90d988ac28e635ea65ece19,PodSandboxId:03b58b75139d9cdc2d98acaa4b1e6bbdbb2e967c9872b825e0a8f2f3c1578629,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,Annotations:map[s
tring]string{},},ImageRef:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,State:CONTAINER_RUNNING,CreatedAt:1704754443248285462,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-5phsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886a9630-22c3-4d03-b42f-b2c1186c7c19,},Annotations:map[string]string{io.kubernetes.container.hash: d038f329,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465,PodSandboxId:a7736af30bf7630abd0019ac04ee65207b1273a0be72a1c029718261f65905a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704754417581289479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c68caaf9-4a8b-49b7-8d56-414aabff20a5,},Annotations:map[string]string{io.kubernetes.container.hash: a6db657c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c5290454df47f080e78d601346f14c5fc9e28b1a34bd7ced142e2c13f451a0,PodSandboxId:173b22b3f6cbc657f9425e5c84b7f45b27c67ce10ccf9dd2de41b1f666a7fb27,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e
15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704754417499833705,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-d5pgh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8df1e3cb-5981-4ca0-8178-2b3f4ef883db,},Annotations:map[string]string{io.kubernetes.container.hash: cfaf1fdf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae,PodSandboxId:488589f8ed1e4559b1299e36c63d7205877a657d9c7b431243025259ce339a3b,Metadata:&ContainerMetadata
{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704754404961428563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b398884-3550-4727-bf6e-9d10cd7e63ba,},Annotations:map[string]string{io.kubernetes.container.hash: 55bb5e79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074,PodSandboxId:b39d6d9e87fade91c4d974f622545b04492d5a89a0d489a5629b96ff8bb1cf88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704754397343329787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nlqgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78d5853-fe43-42cb-b283-3cfabf7408f1,},Annotations:map[string]string{io.kubernetes.container.hash: 90a12478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e23cb3
4099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e,PodSandboxId:f57afd6d60ec199e7063beb2b8051e39cc0fcb07c5e970f4ca56a5aa91abba70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704754372741453134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9513a9abfc4bec220ed857875c9d44,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ebaab17be2d1658d6363
822826cf13ff672594ba08a4eab65a1faa2395939a,PodSandboxId:c6fb9122e1d1304d69b0d61a57b3104e55425b0846394d289c4484ac2b974363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704754372626515249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12296721335dc694685986b99e962f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b
ef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842,PodSandboxId:dec7eaaf111e19534259afd1431dd50ca1c114c743d6da40120744d3fdf67bb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704754372598213243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c8e792aa9f76cb090c06a2a4f81415,},Annotations:map[string]string{io.kubernetes.container.hash: 5b8f5917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a77
60a02ba9,PodSandboxId:e2f8b537c3e136eb7ec6ac892e273577f2512da5c22d7d5842a9cfff2f7f14df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704754372267750926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3a287bd85c417eb3c4253cb1a5b935,},Annotations:map[string]string{io.kubernetes.container.hash: cc3be73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e53caaeb-9975-4804-87a8-41a72c9f24d6 na
me=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.707042559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=325dd88c-d0f0-4549-8ff8-e1163f8764b3 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.707103066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=325dd88c-d0f0-4549-8ff8-e1163f8764b3 name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.709126300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dfa01782-8f08-419e-8794-293cb5c2b166 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.710604678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704754743710576005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=dfa01782-8f08-419e-8794-293cb5c2b166 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.711565140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=87ca3b60-edae-42ea-a4e4-8a3b21c13adf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.711651725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=87ca3b60-edae-42ea-a4e4-8a3b21c13adf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.712063108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e17041b291373e7761a7f4243b2501191d7678c40e889e4e76453674b91bcf1,PodSandboxId:af7111e60555ea96a1fc326e671c705e4d1f02b159efdaab8a3f3d18a3789c11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704754735525197077,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-57kht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a30c05c-2acc-44a4-ac75-a83dc8fd428a,},Annotations:map[string]string{io.kubernetes.container.hash: eb5a2f5f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fc2e79ec2203ea40653d7e55595cc975084915585c35e4181c2b9f0486090,PodSandboxId:173829eb9c77769f06d3d20bf73716dd47ec11658b3673677f88c09cd4bba297,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704754596188310598,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b36765f8-4cc3-464b-a1d8-aac9847d8391,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b9cfeff,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe927a0bc1560553e22aa755a5a97997dcfebf57df4fcb9ad928ce7e711fe413,PodSandboxId:9d966426c3f9a8e74091612c0c2d9266cef236b56c9b0fb6f981ff4eb2615bbc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704754574337556661,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-sfj86,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c595560c-8e3d-4723-8840-ad6fe139c985,},Annotations:map[string]string{io.kubernetes.container.hash: 7de1d685,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34,PodSandboxId:961bce878f46dca22aaa2d6f89e98257b02521d47a7076b4d0e0ce76d5aadf9b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704754486250218772,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-4wmbs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e71427ad-d27d-46a3-9de7-ebb6b117a0af,},Annotations:map[string]string{io.kubernetes.container.hash: adcf61c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c,PodSandboxId:4ade749b95a0466d1db9675292f9ce17b052f6c27a6618200bfebefd3d3ea9e9,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523b
f5,State:CONTAINER_RUNNING,CreatedAt:1704754470402570106,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-br7js,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ce618-3a9f-47a5-9070-e7364b2a564a,},Annotations:map[string]string{io.kubernetes.container.hash: ce5a1bbd,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d335b3cfb0835b4edc1ee00b4ca8778961b740f6d90d988ac28e635ea65ece19,PodSandboxId:03b58b75139d9cdc2d98acaa4b1e6bbdbb2e967c9872b825e0a8f2f3c1578629,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,Annotations:map[s
tring]string{},},ImageRef:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,State:CONTAINER_RUNNING,CreatedAt:1704754443248285462,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-5phsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886a9630-22c3-4d03-b42f-b2c1186c7c19,},Annotations:map[string]string{io.kubernetes.container.hash: d038f329,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465,PodSandboxId:a7736af30bf7630abd0019ac04ee65207b1273a0be72a1c029718261f65905a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704754417581289479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c68caaf9-4a8b-49b7-8d56-414aabff20a5,},Annotations:map[string]string{io.kubernetes.container.hash: a6db657c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c5290454df47f080e78d601346f14c5fc9e28b1a34bd7ced142e2c13f451a0,PodSandboxId:173b22b3f6cbc657f9425e5c84b7f45b27c67ce10ccf9dd2de41b1f666a7fb27,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e
15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704754417499833705,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-d5pgh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8df1e3cb-5981-4ca0-8178-2b3f4ef883db,},Annotations:map[string]string{io.kubernetes.container.hash: cfaf1fdf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae,PodSandboxId:488589f8ed1e4559b1299e36c63d7205877a657d9c7b431243025259ce339a3b,Metadata:&ContainerMetadata
{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704754404961428563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b398884-3550-4727-bf6e-9d10cd7e63ba,},Annotations:map[string]string{io.kubernetes.container.hash: 55bb5e79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074,PodSandboxId:b39d6d9e87fade91c4d974f622545b04492d5a89a0d489a5629b96ff8bb1cf88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704754397343329787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nlqgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78d5853-fe43-42cb-b283-3cfabf7408f1,},Annotations:map[string]string{io.kubernetes.container.hash: 90a12478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e23cb3
4099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e,PodSandboxId:f57afd6d60ec199e7063beb2b8051e39cc0fcb07c5e970f4ca56a5aa91abba70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704754372741453134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9513a9abfc4bec220ed857875c9d44,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ebaab17be2d1658d6363
822826cf13ff672594ba08a4eab65a1faa2395939a,PodSandboxId:c6fb9122e1d1304d69b0d61a57b3104e55425b0846394d289c4484ac2b974363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704754372626515249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12296721335dc694685986b99e962f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b
ef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842,PodSandboxId:dec7eaaf111e19534259afd1431dd50ca1c114c743d6da40120744d3fdf67bb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704754372598213243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c8e792aa9f76cb090c06a2a4f81415,},Annotations:map[string]string{io.kubernetes.container.hash: 5b8f5917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a77
60a02ba9,PodSandboxId:e2f8b537c3e136eb7ec6ac892e273577f2512da5c22d7d5842a9cfff2f7f14df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704754372267750926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3a287bd85c417eb3c4253cb1a5b935,},Annotations:map[string]string{io.kubernetes.container.hash: cc3be73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=87ca3b60-edae-42ea-a4e4-8a3b21c13adf na
me=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.758713038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f872f01e-001a-49dc-8f4f-0f120d49dc3f name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.758810223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f872f01e-001a-49dc-8f4f-0f120d49dc3f name=/runtime.v1.RuntimeService/Version
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.760500459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=33145d35-ac4c-49fc-8013-1d168e7b3c7a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.762061341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704754743762041938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=33145d35-ac4c-49fc-8013-1d168e7b3c7a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.762763390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2cf0af20-feda-4a64-9b3c-5b2bde4f7cc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.762850490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2cf0af20-feda-4a64-9b3c-5b2bde4f7cc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 22:59:03 addons-910124 crio[712]: time="2024-01-08 22:59:03.763308660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e17041b291373e7761a7f4243b2501191d7678c40e889e4e76453674b91bcf1,PodSandboxId:af7111e60555ea96a1fc326e671c705e4d1f02b159efdaab8a3f3d18a3789c11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704754735525197077,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-57kht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a30c05c-2acc-44a4-ac75-a83dc8fd428a,},Annotations:map[string]string{io.kubernetes.container.hash: eb5a2f5f,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fc2e79ec2203ea40653d7e55595cc975084915585c35e4181c2b9f0486090,PodSandboxId:173829eb9c77769f06d3d20bf73716dd47ec11658b3673677f88c09cd4bba297,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704754596188310598,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b36765f8-4cc3-464b-a1d8-aac9847d8391,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b9cfeff,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe927a0bc1560553e22aa755a5a97997dcfebf57df4fcb9ad928ce7e711fe413,PodSandboxId:9d966426c3f9a8e74091612c0c2d9266cef236b56c9b0fb6f981ff4eb2615bbc,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704754574337556661,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-sfj86,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: c595560c-8e3d-4723-8840-ad6fe139c985,},Annotations:map[string]string{io.kubernetes.container.hash: 7de1d685,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34,PodSandboxId:961bce878f46dca22aaa2d6f89e98257b02521d47a7076b4d0e0ce76d5aadf9b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704754486250218772,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-4wmbs,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: e71427ad-d27d-46a3-9de7-ebb6b117a0af,},Annotations:map[string]string{io.kubernetes.container.hash: adcf61c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d29b343574ca3793bbb556bb7b113a687c7d722fc7667edc7d56da773f7796c,PodSandboxId:4ade749b95a0466d1db9675292f9ce17b052f6c27a6618200bfebefd3d3ea9e9,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523b
f5,State:CONTAINER_RUNNING,CreatedAt:1704754470402570106,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-br7js,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 770ce618-3a9f-47a5-9070-e7364b2a564a,},Annotations:map[string]string{io.kubernetes.container.hash: ce5a1bbd,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d335b3cfb0835b4edc1ee00b4ca8778961b740f6d90d988ac28e635ea65ece19,PodSandboxId:03b58b75139d9cdc2d98acaa4b1e6bbdbb2e967c9872b825e0a8f2f3c1578629,Metadata:&ContainerMetadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,Annotations:map[s
tring]string{},},ImageRef:docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86,State:CONTAINER_RUNNING,CreatedAt:1704754443248285462,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-5phsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886a9630-22c3-4d03-b42f-b2c1186c7c19,},Annotations:map[string]string{io.kubernetes.container.hash: d038f329,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465,PodSandboxId:a7736af30bf7630abd0019ac04ee65207b1273a0be72a1c029718261f65905a2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704754417581289479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c68caaf9-4a8b-49b7-8d56-414aabff20a5,},Annotations:map[string]string{io.kubernetes.container.hash: a6db657c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5c5290454df47f080e78d601346f14c5fc9e28b1a34bd7ced142e2c13f451a0,PodSandboxId:173b22b3f6cbc657f9425e5c84b7f45b27c67ce10ccf9dd2de41b1f666a7fb27,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e
15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704754417499833705,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-d5pgh,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8df1e3cb-5981-4ca0-8178-2b3f4ef883db,},Annotations:map[string]string{io.kubernetes.container.hash: cfaf1fdf,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae,PodSandboxId:488589f8ed1e4559b1299e36c63d7205877a657d9c7b431243025259ce339a3b,Metadata:&ContainerMetadata
{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704754404961428563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b398884-3550-4727-bf6e-9d10cd7e63ba,},Annotations:map[string]string{io.kubernetes.container.hash: 55bb5e79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074,PodSandboxId:b39d6d9e87fade91c4d974f622545b04492d5a89a0d489a5629b96ff8bb1cf88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&Im
ageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704754397343329787,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nlqgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f78d5853-fe43-42cb-b283-3cfabf7408f1,},Annotations:map[string]string{io.kubernetes.container.hash: 90a12478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e23cb3
4099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e,PodSandboxId:f57afd6d60ec199e7063beb2b8051e39cc0fcb07c5e970f4ca56a5aa91abba70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704754372741453134,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9513a9abfc4bec220ed857875c9d44,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ebaab17be2d1658d6363
822826cf13ff672594ba08a4eab65a1faa2395939a,PodSandboxId:c6fb9122e1d1304d69b0d61a57b3104e55425b0846394d289c4484ac2b974363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704754372626515249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12296721335dc694685986b99e962f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b
ef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842,PodSandboxId:dec7eaaf111e19534259afd1431dd50ca1c114c743d6da40120744d3fdf67bb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704754372598213243,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c8e792aa9f76cb090c06a2a4f81415,},Annotations:map[string]string{io.kubernetes.container.hash: 5b8f5917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a77
60a02ba9,PodSandboxId:e2f8b537c3e136eb7ec6ac892e273577f2512da5c22d7d5842a9cfff2f7f14df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704754372267750926,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-910124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3a287bd85c417eb3c4253cb1a5b935,},Annotations:map[string]string{io.kubernetes.container.hash: cc3be73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2cf0af20-feda-4a64-9b3c-5b2bde4f7cc4 na
me=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e17041b29137       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7           8 seconds ago       Running             hello-world-app           0                   af7111e60555e       hello-world-app-5d77478584-57kht
	0b6fc2e79ec22       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                   2 minutes ago       Running             nginx                     0                   173829eb9c777       nginx
	fe927a0bc1560       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67             2 minutes ago       Running             headlamp                  0                   9d966426c3f9a       headlamp-7ddfbb94ff-sfj86
	cfbb2b5868246       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06      4 minutes ago       Running             gcp-auth                  0                   961bce878f46d       gcp-auth-d4c87556c-4wmbs
	4d29b343574ca       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5   4 minutes ago       Running             registry-proxy            0                   4ade749b95a04       registry-proxy-br7js
	d335b3cfb0835       docker.io/library/registry@sha256:0a182cb82c93939407967d6d71d6caf11dcef0e5689c6afe2d60518e3b34ab86                5 minutes ago       Running             registry                  0                   03b58b75139d9       registry-5phsw
	f7f1cc8b30161       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                  5 minutes ago       Running             storage-provisioner       0                   a7736af30bf76       storage-provisioner
	e5c5290454df4       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                   5 minutes ago       Running             yakd                      0                   173b22b3f6cbc       yakd-dashboard-9947fc6bf-d5pgh
	22ca3f1305931       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                  5 minutes ago       Running             kube-proxy                0                   488589f8ed1e4       kube-proxy-qzsv5
	7c50c880fc226       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                  5 minutes ago       Running             coredns                   0                   b39d6d9e87fad       coredns-5dd5756b68-nlqgd
	4e23cb34099d4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                  6 minutes ago       Running             kube-scheduler            0                   f57afd6d60ec1       kube-scheduler-addons-910124
	22ebaab17be2d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                  6 minutes ago       Running             kube-controller-manager   0                   c6fb9122e1d13       kube-controller-manager-addons-910124
	bef86635ce9a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                  6 minutes ago       Running             etcd                      0                   dec7eaaf111e1       etcd-addons-910124
	c0f1ac0ede0f8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                  6 minutes ago       Running             kube-apiserver            0                   e2f8b537c3e13       kube-apiserver-addons-910124
	
	
	==> coredns [7c50c880fc22624a0e30ed5ef20fc5d48941e9200c86a390a6b61fb4448ad074] <==
	[INFO] 10.244.0.8:36667 - 34584 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113742s
	[INFO] 10.244.0.8:57641 - 3133 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090329s
	[INFO] 10.244.0.8:57641 - 36403 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110642s
	[INFO] 10.244.0.8:45904 - 18040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059028s
	[INFO] 10.244.0.8:45904 - 48710 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070364s
	[INFO] 10.244.0.8:59578 - 34745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094784s
	[INFO] 10.244.0.8:59578 - 11702 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144724s
	[INFO] 10.244.0.8:47995 - 13856 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001411604s
	[INFO] 10.244.0.8:47995 - 12583 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.003668703s
	[INFO] 10.244.0.8:37296 - 25870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060587s
	[INFO] 10.244.0.8:37296 - 48906 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215654s
	[INFO] 10.244.0.8:34377 - 4944 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065893s
	[INFO] 10.244.0.8:34377 - 42847 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000494295s
	[INFO] 10.244.0.8:59558 - 6025 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000296096s
	[INFO] 10.244.0.8:59558 - 23439 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035732s
	[INFO] 10.244.0.20:47528 - 9674 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307155s
	[INFO] 10.244.0.20:33784 - 12481 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130813s
	[INFO] 10.244.0.20:56930 - 26736 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000398441s
	[INFO] 10.244.0.20:50939 - 53688 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000372579s
	[INFO] 10.244.0.20:34587 - 4671 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000194431s
	[INFO] 10.244.0.20:51885 - 29875 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00021741s
	[INFO] 10.244.0.20:57307 - 36853 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000700935s
	[INFO] 10.244.0.20:56559 - 26685 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0009464s
	[INFO] 10.244.0.23:40299 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001810547s
	[INFO] 10.244.0.23:41785 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000369584s
	
	
	==> describe nodes <==
	Name:               addons-910124
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910124
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-910124
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_53_01_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910124
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:52:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910124
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:58:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:57:06 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:57:06 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:57:06 +0000   Mon, 08 Jan 2024 22:52:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:57:06 +0000   Mon, 08 Jan 2024 22:53:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-910124
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf8c958003dc42c1a0fb654d3ae6456a
	  System UUID:                cf8c9580-03dc-42c1-a0fb-654d3ae6456a
	  Boot ID:                    effe1175-b51f-4c81-986a-8be7ea71e2c1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-57kht         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-4wmbs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  headlamp                    headlamp-7ddfbb94ff-sfj86                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 coredns-5dd5756b68-nlqgd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m50s
	  kube-system                 etcd-addons-910124                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m3s
	  kube-system                 kube-apiserver-addons-910124             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-addons-910124    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-qzsv5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-addons-910124             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 registry-5phsw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 registry-proxy-br7js                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-d5pgh           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m28s  kube-proxy       
	  Normal  Starting                 6m4s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s   kubelet          Node addons-910124 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s   kubelet          Node addons-910124 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s   kubelet          Node addons-910124 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m3s   kubelet          Node addons-910124 status is now: NodeReady
	  Normal  RegisteredNode           5m51s  node-controller  Node addons-910124 event: Registered Node addons-910124 in Controller
	
	
	==> dmesg <==
	[  +5.142462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.404427] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.122483] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.164268] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.119928] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.248676] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +11.211568] systemd-fstab-generator[904]: Ignoring "noauto" for root device
	[ +10.302806] systemd-fstab-generator[1240]: Ignoring "noauto" for root device
	[Jan 8 22:53] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.809775] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.281880] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.733317] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:54] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.068029] kauditd_printk_skb: 24 callbacks suppressed
	[Jan 8 22:55] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.907431] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:56] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.923573] kauditd_printk_skb: 19 callbacks suppressed
	[ +19.506863] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.994463] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.104613] kauditd_printk_skb: 4 callbacks suppressed
	[Jan 8 22:57] kauditd_printk_skb: 10 callbacks suppressed
	[Jan 8 22:58] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [bef86635ce9a17990933f4e03cca12854ce07b8b768a5a624010cb0efb6fe842] <==
	{"level":"warn","ts":"2024-01-08T22:54:27.742428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.281807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"warn","ts":"2024-01-08T22:54:27.742464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.9388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14030"}
	{"level":"info","ts":"2024-01-08T22:54:27.749504Z","caller":"traceutil/trace.go:171","msg":"trace[803257668] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:996; }","duration":"143.624285ms","start":"2024-01-08T22:54:27.605861Z","end":"2024-01-08T22:54:27.749485Z","steps":["trace[803257668] 'agreement among raft nodes before linearized reading'  (duration: 136.519239ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:27.749661Z","caller":"traceutil/trace.go:171","msg":"trace[135481271] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:996; }","duration":"170.506332ms","start":"2024-01-08T22:54:27.579143Z","end":"2024-01-08T22:54:27.749649Z","steps":["trace[135481271] 'agreement among raft nodes before linearized reading'  (duration: 163.26193ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:27.749727Z","caller":"traceutil/trace.go:171","msg":"trace[1488137472] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:996; }","duration":"225.19982ms","start":"2024-01-08T22:54:27.524521Z","end":"2024-01-08T22:54:27.749721Z","steps":["trace[1488137472] 'agreement among raft nodes before linearized reading'  (duration: 217.913133ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:54:37.957082Z","caller":"traceutil/trace.go:171","msg":"trace[1291385949] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"104.593351ms","start":"2024-01-08T22:54:37.852473Z","end":"2024-01-08T22:54:37.957067Z","steps":["trace[1291385949] 'process raft request'  (duration: 104.330985ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.947215Z","caller":"traceutil/trace.go:171","msg":"trace[1634347057] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"186.372781ms","start":"2024-01-08T22:55:03.760232Z","end":"2024-01-08T22:55:03.946605Z","steps":["trace[1634347057] 'process raft request'  (duration: 185.914295ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.951136Z","caller":"traceutil/trace.go:171","msg":"trace[274724205] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"143.044638ms","start":"2024-01-08T22:55:03.808073Z","end":"2024-01-08T22:55:03.951117Z","steps":["trace[274724205] 'process raft request'  (duration: 142.963483ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:03.951196Z","caller":"traceutil/trace.go:171","msg":"trace[389919165] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"177.431888ms","start":"2024-01-08T22:55:03.773743Z","end":"2024-01-08T22:55:03.951175Z","steps":["trace[389919165] 'process raft request'  (duration: 177.065685ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:55:45.856727Z","caller":"traceutil/trace.go:171","msg":"trace[476128016] transaction","detail":"{read_only:false; response_revision:1264; number_of_response:1; }","duration":"229.432044ms","start":"2024-01-08T22:55:45.62716Z","end":"2024-01-08T22:55:45.856592Z","steps":["trace[476128016] 'process raft request'  (duration: 229.052484ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:56:14.137105Z","caller":"traceutil/trace.go:171","msg":"trace[406506714] transaction","detail":"{read_only:false; response_revision:1449; number_of_response:1; }","duration":"444.916049ms","start":"2024-01-08T22:56:13.692154Z","end":"2024-01-08T22:56:14.13707Z","steps":["trace[406506714] 'process raft request'  (duration: 444.231552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:56:14.137509Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:56:13.692134Z","time spent":"445.164164ms","remote":"127.0.0.1:35814","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-my7cotauepkwlklxxhp7ca2kbu\" mod_revision:1365 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-my7cotauepkwlklxxhp7ca2kbu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-my7cotauepkwlklxxhp7ca2kbu\" > >"}
	{"level":"info","ts":"2024-01-08T22:56:19.149866Z","caller":"traceutil/trace.go:171","msg":"trace[1615008308] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"343.178868ms","start":"2024-01-08T22:56:18.80665Z","end":"2024-01-08T22:56:19.149829Z","steps":["trace[1615008308] 'process raft request'  (duration: 342.706976ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:56:19.150399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:56:18.806633Z","time spent":"343.595478ms","remote":"127.0.0.1:35792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1481 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-08T22:56:35.721264Z","caller":"traceutil/trace.go:171","msg":"trace[36539039] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"273.843069ms","start":"2024-01-08T22:56:35.447386Z","end":"2024-01-08T22:56:35.721229Z","steps":["trace[36539039] 'process raft request'  (duration: 273.636169ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:56:35.722438Z","caller":"traceutil/trace.go:171","msg":"trace[1050098612] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1700; }","duration":"271.181682ms","start":"2024-01-08T22:56:35.451242Z","end":"2024-01-08T22:56:35.722423Z","steps":["trace[1050098612] 'read index received'  (duration: 271.177942ms)","trace[1050098612] 'applied index is now lower than readState.Index'  (duration: 3.234µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:56:35.722652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.425211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-01-08T22:56:35.72269Z","caller":"traceutil/trace.go:171","msg":"trace[177043735] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1633; }","duration":"271.484745ms","start":"2024-01-08T22:56:35.451197Z","end":"2024-01-08T22:56:35.722681Z","steps":["trace[177043735] 'agreement among raft nodes before linearized reading'  (duration: 271.350316ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:56:35.937643Z","caller":"traceutil/trace.go:171","msg":"trace[1096809342] linearizableReadLoop","detail":"{readStateIndex:1701; appliedIndex:1700; }","duration":"215.105238ms","start":"2024-01-08T22:56:35.722525Z","end":"2024-01-08T22:56:35.93763Z","steps":["trace[1096809342] 'read index received'  (duration: 214.037253ms)","trace[1096809342] 'applied index is now lower than readState.Index'  (duration: 1.067416ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:56:35.937738Z","caller":"traceutil/trace.go:171","msg":"trace[1907229178] transaction","detail":"{read_only:false; response_revision:1634; number_of_response:1; }","duration":"261.88986ms","start":"2024-01-08T22:56:35.675831Z","end":"2024-01-08T22:56:35.937721Z","steps":["trace[1907229178] 'process raft request'  (duration: 260.780281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:56:35.938667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"414.612988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:56:35.938726Z","caller":"traceutil/trace.go:171","msg":"trace[1767864842] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1634; }","duration":"414.693718ms","start":"2024-01-08T22:56:35.524023Z","end":"2024-01-08T22:56:35.938717Z","steps":["trace[1767864842] 'agreement among raft nodes before linearized reading'  (duration: 414.599101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:56:35.93875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:56:35.524008Z","time spent":"414.734939ms","remote":"127.0.0.1:35824","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-01-08T22:56:35.941683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.057173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:500"}
	{"level":"info","ts":"2024-01-08T22:56:35.941757Z","caller":"traceutil/trace.go:171","msg":"trace[374502857] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1634; }","duration":"284.258788ms","start":"2024-01-08T22:56:35.657488Z","end":"2024-01-08T22:56:35.941747Z","steps":["trace[374502857] 'agreement among raft nodes before linearized reading'  (duration: 280.301527ms)"],"step_count":1}
	
	
	==> gcp-auth [cfbb2b586824656bf4ba646082f62f652c2f690cf25fdca17d0736897f19dc34] <==
	2024/01/08 22:54:46 GCP Auth Webhook started!
	2024/01/08 22:55:55 Ready to marshal response ...
	2024/01/08 22:55:55 Ready to write response ...
	2024/01/08 22:55:56 Ready to marshal response ...
	2024/01/08 22:55:56 Ready to write response ...
	2024/01/08 22:55:59 Ready to marshal response ...
	2024/01/08 22:55:59 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	2024/01/08 22:56:05 Ready to marshal response ...
	2024/01/08 22:56:05 Ready to write response ...
	2024/01/08 22:56:08 Ready to marshal response ...
	2024/01/08 22:56:08 Ready to write response ...
	2024/01/08 22:56:11 Ready to marshal response ...
	2024/01/08 22:56:11 Ready to write response ...
	2024/01/08 22:56:27 Ready to marshal response ...
	2024/01/08 22:56:27 Ready to write response ...
	2024/01/08 22:56:30 Ready to marshal response ...
	2024/01/08 22:56:30 Ready to write response ...
	2024/01/08 22:56:46 Ready to marshal response ...
	2024/01/08 22:56:46 Ready to write response ...
	2024/01/08 22:58:52 Ready to marshal response ...
	2024/01/08 22:58:52 Ready to write response ...
	
	
	==> kernel <==
	 22:59:04 up 6 min,  0 users,  load average: 0.80, 2.26, 1.35
	Linux addons-910124 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c0f1ac0ede0f89bd1e8d49b691f0f789bc4679baeab7b00b1fdc0a7760a02ba9] <==
	I0108 22:56:28.176980       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0108 22:56:29.883960       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 22:56:30.115085       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.218.7"}
	I0108 22:57:03.520644       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 22:57:04.267506       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.267654       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.287585       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.287718       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.311266       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.312299       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.317101       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.317193       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.418127       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.418271       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.426041       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.426130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.473119       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.473199       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:57:04.504262       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:57:04.504328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 22:57:05.317356       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 22:57:05.505869       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 22:57:05.526772       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 22:58:52.712699       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.66.212"}
	E0108 22:58:55.788673       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [22ebaab17be2d1658d6363822826cf13ff672594ba08a4eab65a1faa2395939a] <==
	W0108 22:57:42.008090       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:57:42.008201       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:58:12.006389       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:58:12.006495       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:58:12.019460       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:58:12.019528       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:58:22.614494       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:58:22.614719       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:58:27.135385       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:58:27.135458       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:58:52.431783       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 22:58:52.481642       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-57kht"
	I0108 22:58:52.515105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.713636ms"
	I0108 22:58:52.529688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.554106ms"
	I0108 22:58:52.532352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="1.456702ms"
	I0108 22:58:52.536184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="134.316µs"
	W0108 22:58:53.985763       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:58:53.985835       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:58:55.606442       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 22:58:55.636291       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 22:58:55.644628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="15.612µs"
	I0108 22:58:56.123377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.618482ms"
	I0108 22:58:56.123670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="143.809µs"
	W0108 22:59:03.482204       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:59:03.482268       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [22ca3f1305931db22cc0305b4951ff664d7fae713a942166644a0694ec73ecae] <==
	I0108 22:53:34.021779       1 server_others.go:69] "Using iptables proxy"
	I0108 22:53:34.185089       1 node.go:141] Successfully retrieved node IP: 192.168.39.129
	I0108 22:53:34.978246       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:53:34.978300       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:53:35.075215       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:53:35.075326       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:53:35.075574       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:53:35.075624       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:53:35.138842       1 config.go:188] "Starting service config controller"
	I0108 22:53:35.149179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:53:35.149247       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:53:35.149267       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:53:35.178571       1 config.go:315] "Starting node config controller"
	I0108 22:53:35.178619       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:53:35.255993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:53:35.256128       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:53:35.281030       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e23cb34099d49ae89b760cf7d16c14877ce6e83981985cc1241069baeae681e] <==
	W0108 22:52:58.034169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:52:58.034306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:52:58.052296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:52:58.052394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:52:58.126022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:52:58.126086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:52:58.131868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:52:58.131990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:52:58.147529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:52:58.147634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:52:58.193292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:52:58.193417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:52:58.227354       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:52:58.227700       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:52:58.227606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:52:58.227829       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 22:52:58.274706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:58.274771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:58.423303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:52:58.423397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:52:58.518200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:52:58.518306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:52:58.562190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:52:58.562242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 22:52:59.875800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:52:27 UTC, ends at Mon 2024-01-08 22:59:04 UTC. --
	Jan 08 22:58:54 addons-910124 kubelet[1247]: I0108 22:58:54.106036    1247 scope.go:117] "RemoveContainer" containerID="41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5"
	Jan 08 22:58:54 addons-910124 kubelet[1247]: E0108 22:58:54.106773    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5\": container with ID starting with 41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5 not found: ID does not exist" containerID="41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5"
	Jan 08 22:58:54 addons-910124 kubelet[1247]: I0108 22:58:54.106845    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5"} err="failed to get container status \"41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5\": rpc error: code = NotFound desc = could not find container \"41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5\": container with ID starting with 41b3661efb781f05683562f827ad30abd91d83c8c131f2daa3f06a035632ffa5 not found: ID does not exist"
	Jan 08 22:58:54 addons-910124 kubelet[1247]: I0108 22:58:54.109731    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1421ba58-25cc-45eb-b175-3febdab83a8e-kube-api-access-6fsxp" (OuterVolumeSpecName: "kube-api-access-6fsxp") pod "1421ba58-25cc-45eb-b175-3febdab83a8e" (UID: "1421ba58-25cc-45eb-b175-3febdab83a8e"). InnerVolumeSpecName "kube-api-access-6fsxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:58:54 addons-910124 kubelet[1247]: I0108 22:58:54.202996    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6fsxp\" (UniqueName: \"kubernetes.io/projected/1421ba58-25cc-45eb-b175-3febdab83a8e-kube-api-access-6fsxp\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:58:55 addons-910124 kubelet[1247]: I0108 22:58:55.052314    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1421ba58-25cc-45eb-b175-3febdab83a8e" path="/var/lib/kubelet/pods/1421ba58-25cc-45eb-b175-3febdab83a8e/volumes"
	Jan 08 22:58:57 addons-910124 kubelet[1247]: I0108 22:58:57.053561    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6d768c07-8c3b-4cfe-8396-5ef3ce9254e3" path="/var/lib/kubelet/pods/6d768c07-8c3b-4cfe-8396-5ef3ce9254e3/volumes"
	Jan 08 22:58:57 addons-910124 kubelet[1247]: I0108 22:58:57.054277    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e6cc6434-3350-4f77-81c8-b323beb8d885" path="/var/lib/kubelet/pods/e6cc6434-3350-4f77-81c8-b323beb8d885/volumes"
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.044292    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db8kc\" (UniqueName: \"kubernetes.io/projected/0e3eadbf-d882-4751-84f1-43d0f065558c-kube-api-access-db8kc\") pod \"0e3eadbf-d882-4751-84f1-43d0f065558c\" (UID: \"0e3eadbf-d882-4751-84f1-43d0f065558c\") "
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.044582    1247 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3eadbf-d882-4751-84f1-43d0f065558c-webhook-cert\") pod \"0e3eadbf-d882-4751-84f1-43d0f065558c\" (UID: \"0e3eadbf-d882-4751-84f1-43d0f065558c\") "
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.056493    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3eadbf-d882-4751-84f1-43d0f065558c-kube-api-access-db8kc" (OuterVolumeSpecName: "kube-api-access-db8kc") pod "0e3eadbf-d882-4751-84f1-43d0f065558c" (UID: "0e3eadbf-d882-4751-84f1-43d0f065558c"). InnerVolumeSpecName "kube-api-access-db8kc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.056782    1247 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3eadbf-d882-4751-84f1-43d0f065558c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0e3eadbf-d882-4751-84f1-43d0f065558c" (UID: "0e3eadbf-d882-4751-84f1-43d0f065558c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.120175    1247 scope.go:117] "RemoveContainer" containerID="c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892"
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.147110    1247 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-db8kc\" (UniqueName: \"kubernetes.io/projected/0e3eadbf-d882-4751-84f1-43d0f065558c-kube-api-access-db8kc\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.147141    1247 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3eadbf-d882-4751-84f1-43d0f065558c-webhook-cert\") on node \"addons-910124\" DevicePath \"\""
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.168057    1247 scope.go:117] "RemoveContainer" containerID="c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892"
	Jan 08 22:58:59 addons-910124 kubelet[1247]: E0108 22:58:59.169134    1247 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892\": container with ID starting with c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892 not found: ID does not exist" containerID="c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892"
	Jan 08 22:58:59 addons-910124 kubelet[1247]: I0108 22:58:59.169232    1247 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892"} err="failed to get container status \"c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892\": rpc error: code = NotFound desc = could not find container \"c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892\": container with ID starting with c6f74bba0afcab6ae8c1b4b9a1b26344bcfe8cdde9630542cfa33eb14baaa892 not found: ID does not exist"
	Jan 08 22:59:01 addons-910124 kubelet[1247]: I0108 22:59:01.053165    1247 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0e3eadbf-d882-4751-84f1-43d0f065558c" path="/var/lib/kubelet/pods/0e3eadbf-d882-4751-84f1-43d0f065558c/volumes"
	Jan 08 22:59:01 addons-910124 kubelet[1247]: E0108 22:59:01.177753    1247 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 22:59:01 addons-910124 kubelet[1247]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 22:59:01 addons-910124 kubelet[1247]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 22:59:01 addons-910124 kubelet[1247]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 22:59:01 addons-910124 kubelet[1247]: I0108 22:59:01.864279    1247 scope.go:117] "RemoveContainer" containerID="49a8e6a6de2d12f281956f883dac55b4a18da2eb657a9594baacb74c33266c0f"
	Jan 08 22:59:01 addons-910124 kubelet[1247]: I0108 22:59:01.905379    1247 scope.go:117] "RemoveContainer" containerID="c7e55da588631b91ab73cd3b8645cfbec5106d9fc75d4cc3f8ef1a8fc0c24569"
	
	
	==> storage-provisioner [f7f1cc8b301617d14068bf0d6fcdfadf7a3c8ccda5311f651eec5a6cc7d8d465] <==
	I0108 22:53:39.547454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:53:39.666517       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:53:39.666631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:53:39.679366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:53:39.679657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb!
	I0108 22:53:39.832324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8b7bb6d-9254-4075-ad69-e63df017e5f5", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb became leader
	I0108 22:53:40.083764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-910124_79e8042e-46a0-425c-adf1-34bf527648cb!
	E0108 22:56:31.143636       1 controller.go:1050] claim "838993e9-8f14-4313-9980-01e166fc3d0f" in work queue no longer exists
	E0108 22:56:56.328092       1 controller.go:1050] claim "90c3a7f1-f501-4598-a742-8504262250ca" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910124 -n addons-910124
helpers_test.go:261: (dbg) Run:  kubectl --context addons-910124 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.82s)

x
+
TestAddons/StoppedEnableDisable (155.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-910124
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-910124: exit status 82 (2m1.588481838s)

-- stdout --
	* Stopping node "addons-910124"  ...
	* Stopping node "addons-910124"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-910124" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910124
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910124: exit status 11 (21.557498247s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-910124" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910124
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910124: exit status 11 (6.143716402s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-910124" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-910124
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-910124: exit status 11 (6.14334041s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-910124" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.43s)

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (177.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-132808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-132808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.974676223s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-132808 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-132808 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a359bd4b-8452-4dd0-914f-38e7a1fc5591] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a359bd4b-8452-4dd0-914f-38e7a1fc5591] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.004237611s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 23:10:49.610697  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:11:13.676272  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.681610  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.691888  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.712193  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.752486  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.832854  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:13.993341  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:14.313950  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:14.955003  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:16.235572  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:17.297386  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:11:18.796214  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:23.917341  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:11:34.157921  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-132808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.508626008s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-132808 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.117
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons disable ingress-dns --alsologtostderr -v=1
E0108 23:11:54.638634  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons disable ingress-dns --alsologtostderr -v=1: (13.905196108s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons disable ingress --alsologtostderr -v=1: (7.606183331s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-132808 -n ingress-addon-legacy-132808
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-132808 logs -n 25: (1.170333568s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:06 UTC | 08 Jan 24 23:06 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:06 UTC | 08 Jan 24 23:06 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:06 UTC | 08 Jan 24 23:06 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-483810 image ls                                                | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	| image          | functional-483810 image save                                              | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-483810                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810 image rm                                                | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-483810                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810 image ls                                                | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	| image          | functional-483810 image load                                              | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810 image ls                                                | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	| image          | functional-483810 image save --daemon                                     | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-483810                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-483810 ssh pgrep                                               | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810 image build -t                                          | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | localhost/my-image:functional-483810                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810                                                         | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-483810 image ls                                                | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	| delete         | -p functional-483810                                                      | functional-483810           | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:07 UTC |
	| start          | -p ingress-addon-legacy-132808                                            | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:07 UTC | 08 Jan 24 23:09 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-132808                                               | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:09 UTC | 08 Jan 24 23:09 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-132808                                               | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:09 UTC | 08 Jan 24 23:09 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-132808                                               | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:09 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-132808 ip                                            | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:11 UTC | 08 Jan 24 23:11 UTC |
	| addons         | ingress-addon-legacy-132808                                               | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:11 UTC | 08 Jan 24 23:12 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-132808                                               | ingress-addon-legacy-132808 | jenkins | v1.32.0 | 08 Jan 24 23:12 UTC | 08 Jan 24 23:12 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:07:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:07:12.319666  415913 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:07:12.319974  415913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:07:12.319983  415913 out.go:309] Setting ErrFile to fd 2...
	I0108 23:07:12.319988  415913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:07:12.320233  415913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:07:12.320966  415913 out.go:303] Setting JSON to false
	I0108 23:07:12.322081  415913 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13758,"bootTime":1704741474,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:07:12.322179  415913 start.go:138] virtualization: kvm guest
	I0108 23:07:12.325229  415913 out.go:177] * [ingress-addon-legacy-132808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:07:12.327371  415913 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:07:12.328774  415913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:07:12.327477  415913 notify.go:220] Checking for updates...
	I0108 23:07:12.330455  415913 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:07:12.331940  415913 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:07:12.333308  415913 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:07:12.334764  415913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:07:12.336437  415913 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:07:12.376587  415913 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 23:07:12.378090  415913 start.go:298] selected driver: kvm2
	I0108 23:07:12.378123  415913 start.go:902] validating driver "kvm2" against <nil>
	I0108 23:07:12.378145  415913 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:07:12.378929  415913 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:07:12.379066  415913 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:07:12.396189  415913 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:07:12.396272  415913 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 23:07:12.396614  415913 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:07:12.396698  415913 cni.go:84] Creating CNI manager for ""
	I0108 23:07:12.396714  415913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:07:12.396726  415913 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 23:07:12.396740  415913 start_flags.go:323] config:
	{Name:ingress-addon-legacy-132808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-132808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
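The config dump above is minikube's flattened cluster configuration as generated from the start flags. As a reading aid only, the sketch below models a small subset of the fields visible in that dump; the field names come from the log, but the types and struct layout are assumptions and do not reproduce minikube's actual config package.

```go
// Illustrative subset only: a few of the cluster-config fields visible in the
// log line above. Types and grouping are assumptions for readability.
package main

import (
	"encoding/json"
	"fmt"
)

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Memory           int // MiB
	CPUs             int
	DiskSize         int // MB
	Driver           string
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:     "ingress-addon-legacy-132808",
		Memory:   4096,
		CPUs:     2,
		DiskSize: 20000,
		Driver:   "kvm2",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.18.20",
			ClusterName:       "ingress-addon-legacy-132808",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```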
	I0108 23:07:12.396910  415913 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:07:12.399012  415913 out.go:177] * Starting control plane node ingress-addon-legacy-132808 in cluster ingress-addon-legacy-132808
	I0108 23:07:12.400628  415913 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:07:12.432122  415913 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 23:07:12.432163  415913 cache.go:56] Caching tarball of preloaded images
	I0108 23:07:12.432381  415913 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:07:12.434461  415913 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 23:07:12.435998  415913 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:07:12.466708  415913 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 23:07:16.039897  415913 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:07:16.040013  415913 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:07:17.074029  415913 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
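The preload step above downloads the tarball with an `?checksum=md5:...` query and then verifies the saved file before trusting the cache. A minimal, hedged sketch of that verification step (not minikube's actual code) could look like this, using the checksum value shown in the download URL above:

```go
// Illustrative only: verifying a downloaded preload tarball against an md5
// checksum, as the log above does for the v1.18.20 cri-o preload.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Expected checksum taken from the download URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
		"0d02e096853189c5b37812b400898e14")
	fmt.Println("verify:", err)
}
```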
	I0108 23:07:17.074450  415913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/config.json ...
	I0108 23:07:17.074493  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/config.json: {Name:mk745ad9b2b5c5a152c61eb55ab7fe9fe1c71647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:17.074671  415913 start.go:365] acquiring machines lock for ingress-addon-legacy-132808: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:07:17.074706  415913 start.go:369] acquired machines lock for "ingress-addon-legacy-132808" in 17.4µs
	I0108 23:07:17.074724  415913 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-132808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-132808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:07:17.074827  415913 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 23:07:17.077013  415913 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0108 23:07:17.077180  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:07:17.077208  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:07:17.093659  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0108 23:07:17.094233  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:07:17.094929  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:07:17.094957  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:07:17.095388  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:07:17.095606  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetMachineName
	I0108 23:07:17.095798  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:17.095938  415913 start.go:159] libmachine.API.Create for "ingress-addon-legacy-132808" (driver="kvm2")
	I0108 23:07:17.095967  415913 client.go:168] LocalClient.Create starting
	I0108 23:07:17.096008  415913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0108 23:07:17.096053  415913 main.go:141] libmachine: Decoding PEM data...
	I0108 23:07:17.096070  415913 main.go:141] libmachine: Parsing certificate...
	I0108 23:07:17.096133  415913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0108 23:07:17.096154  415913 main.go:141] libmachine: Decoding PEM data...
	I0108 23:07:17.096165  415913 main.go:141] libmachine: Parsing certificate...
	I0108 23:07:17.096180  415913 main.go:141] libmachine: Running pre-create checks...
	I0108 23:07:17.096189  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .PreCreateCheck
	I0108 23:07:17.096581  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetConfigRaw
	I0108 23:07:17.097046  415913 main.go:141] libmachine: Creating machine...
	I0108 23:07:17.097061  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Create
	I0108 23:07:17.097198  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Creating KVM machine...
	I0108 23:07:17.098687  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found existing default KVM network
	I0108 23:07:17.099418  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:17.099245  415947 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0108 23:07:17.105810  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | trying to create private KVM network mk-ingress-addon-legacy-132808 192.168.39.0/24...
	I0108 23:07:17.194248  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | private KVM network mk-ingress-addon-legacy-132808 192.168.39.0/24 created
	I0108 23:07:17.194306  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:17.194208  415947 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:07:17.194425  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808 ...
	I0108 23:07:17.194465  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 23:07:17.194493  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 23:07:17.449728  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:17.449531  415947 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa...
	I0108 23:07:17.585567  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:17.585348  415947 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/ingress-addon-legacy-132808.rawdisk...
	I0108 23:07:17.585608  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Writing magic tar header
	I0108 23:07:17.585626  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Writing SSH key tar header
	I0108 23:07:17.585636  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:17.585572  415947 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808 ...
	I0108 23:07:17.585750  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808
	I0108 23:07:17.585776  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0108 23:07:17.585787  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808 (perms=drwx------)
	I0108 23:07:17.585891  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0108 23:07:17.585904  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:07:17.585916  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0108 23:07:17.585928  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0108 23:07:17.585938  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 23:07:17.585949  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home/jenkins
	I0108 23:07:17.585957  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Checking permissions on dir: /home
	I0108 23:07:17.585964  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Skipping /home - not owner
	I0108 23:07:17.585976  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0108 23:07:17.585996  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 23:07:17.586011  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 23:07:17.586020  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Creating domain...
	I0108 23:07:17.587417  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) define libvirt domain using xml: 
	I0108 23:07:17.587452  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) <domain type='kvm'>
	I0108 23:07:17.587465  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <name>ingress-addon-legacy-132808</name>
	I0108 23:07:17.587474  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <memory unit='MiB'>4096</memory>
	I0108 23:07:17.587485  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <vcpu>2</vcpu>
	I0108 23:07:17.587495  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <features>
	I0108 23:07:17.587505  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <acpi/>
	I0108 23:07:17.587516  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <apic/>
	I0108 23:07:17.587532  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <pae/>
	I0108 23:07:17.587547  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     
	I0108 23:07:17.587563  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   </features>
	I0108 23:07:17.587577  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <cpu mode='host-passthrough'>
	I0108 23:07:17.587592  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   
	I0108 23:07:17.587610  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   </cpu>
	I0108 23:07:17.587625  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <os>
	I0108 23:07:17.587645  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <type>hvm</type>
	I0108 23:07:17.587661  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <boot dev='cdrom'/>
	I0108 23:07:17.587676  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <boot dev='hd'/>
	I0108 23:07:17.587721  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <bootmenu enable='no'/>
	I0108 23:07:17.587761  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   </os>
	I0108 23:07:17.587788  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   <devices>
	I0108 23:07:17.587805  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <disk type='file' device='cdrom'>
	I0108 23:07:17.587833  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/boot2docker.iso'/>
	I0108 23:07:17.587855  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <target dev='hdc' bus='scsi'/>
	I0108 23:07:17.587871  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <readonly/>
	I0108 23:07:17.587883  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </disk>
	I0108 23:07:17.587898  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <disk type='file' device='disk'>
	I0108 23:07:17.587914  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 23:07:17.587951  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/ingress-addon-legacy-132808.rawdisk'/>
	I0108 23:07:17.587991  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <target dev='hda' bus='virtio'/>
	I0108 23:07:17.588008  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </disk>
	I0108 23:07:17.588023  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <interface type='network'>
	I0108 23:07:17.588041  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <source network='mk-ingress-addon-legacy-132808'/>
	I0108 23:07:17.588067  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <model type='virtio'/>
	I0108 23:07:17.588084  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </interface>
	I0108 23:07:17.588110  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <interface type='network'>
	I0108 23:07:17.588130  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <source network='default'/>
	I0108 23:07:17.588146  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <model type='virtio'/>
	I0108 23:07:17.588165  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </interface>
	I0108 23:07:17.588185  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <serial type='pty'>
	I0108 23:07:17.588199  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <target port='0'/>
	I0108 23:07:17.588209  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </serial>
	I0108 23:07:17.588231  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <console type='pty'>
	I0108 23:07:17.588248  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <target type='serial' port='0'/>
	I0108 23:07:17.588277  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </console>
	I0108 23:07:17.588299  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     <rng model='virtio'>
	I0108 23:07:17.588317  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)       <backend model='random'>/dev/random</backend>
	I0108 23:07:17.588332  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     </rng>
	I0108 23:07:17.588364  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     
	I0108 23:07:17.588378  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)     
	I0108 23:07:17.588391  415913 main.go:141] libmachine: (ingress-addon-legacy-132808)   </devices>
	I0108 23:07:17.588405  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) </domain>
	I0108 23:07:17.588426  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) 
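The XML written above is a plain libvirt domain definition: the boot2docker ISO as a CD-ROM, the raw disk, two virtio NICs (the cluster network and `default`), a serial console and a virtio RNG. Assuming such an XML file on disk, the same define/start cycle can be reproduced manually with virsh; the sketch below simply shells out to virsh and is illustrative, not the kvm2 driver's implementation:

```go
// Illustrative only: reproducing the define/start cycle for the domain XML
// above by shelling out to virsh. The domain name is taken from the log;
// the XML path is a hypothetical placeholder.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "ingress-addon-legacy-132808"
	steps := [][]string{
		{"define", "/tmp/" + name + ".xml"}, // hypothetical path to the XML shown above
		{"start", name},
		{"dumpxml", name}, // print the effective definition back for inspection
	}
	for _, args := range steps {
		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("virsh %v: err=%v\n%s\n", args, err, out)
	}
}
```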
	I0108 23:07:17.593622  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:11:2d:9f in network default
	I0108 23:07:17.594274  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:17.594291  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Ensuring networks are active...
	I0108 23:07:17.595022  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Ensuring network default is active
	I0108 23:07:17.595307  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Ensuring network mk-ingress-addon-legacy-132808 is active
	I0108 23:07:17.595943  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Getting domain xml...
	I0108 23:07:17.596679  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Creating domain...
	I0108 23:07:18.909507  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Waiting to get IP...
	I0108 23:07:18.910286  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:18.910729  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:18.910792  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:18.910717  415947 retry.go:31] will retry after 235.72916ms: waiting for machine to come up
	I0108 23:07:19.149176  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.150025  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.150065  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:19.149944  415947 retry.go:31] will retry after 281.228089ms: waiting for machine to come up
	I0108 23:07:19.432561  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.433024  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.433057  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:19.432996  415947 retry.go:31] will retry after 316.500509ms: waiting for machine to come up
	I0108 23:07:19.751777  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.752184  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:19.752204  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:19.752159  415947 retry.go:31] will retry after 583.476042ms: waiting for machine to come up
	I0108 23:07:20.337698  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:20.338112  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:20.338145  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:20.338073  415947 retry.go:31] will retry after 566.975057ms: waiting for machine to come up
	I0108 23:07:20.906940  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:20.907351  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:20.907428  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:20.907310  415947 retry.go:31] will retry after 944.284152ms: waiting for machine to come up
	I0108 23:07:21.853616  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:21.854044  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:21.854095  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:21.853998  415947 retry.go:31] will retry after 898.010326ms: waiting for machine to come up
	I0108 23:07:22.753523  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:22.753962  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:22.753991  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:22.753921  415947 retry.go:31] will retry after 966.815924ms: waiting for machine to come up
	I0108 23:07:23.722495  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:23.722958  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:23.722986  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:23.722904  415947 retry.go:31] will retry after 1.74367738s: waiting for machine to come up
	I0108 23:07:25.468413  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:25.468846  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:25.468906  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:25.468802  415947 retry.go:31] will retry after 2.035065393s: waiting for machine to come up
	I0108 23:07:27.506046  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:27.506603  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:27.506671  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:27.506565  415947 retry.go:31] will retry after 2.359397491s: waiting for machine to come up
	I0108 23:07:29.869206  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:29.869621  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:29.869651  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:29.869564  415947 retry.go:31] will retry after 2.62735315s: waiting for machine to come up
	I0108 23:07:32.499020  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:32.499636  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:32.499682  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:32.499574  415947 retry.go:31] will retry after 4.244415295s: waiting for machine to come up
	I0108 23:07:36.749150  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:36.749629  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find current IP address of domain ingress-addon-legacy-132808 in network mk-ingress-addon-legacy-132808
	I0108 23:07:36.749661  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | I0108 23:07:36.749588  415947 retry.go:31] will retry after 4.923680913s: waiting for machine to come up
	I0108 23:07:41.678614  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.679311  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Found IP for machine: 192.168.39.117
	I0108 23:07:41.679337  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Reserving static IP address...
	I0108 23:07:41.679355  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has current primary IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.679862  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-132808", mac: "52:54:00:9b:7d:70", ip: "192.168.39.117"} in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.769507  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Reserved static IP address: 192.168.39.117
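The block above is libmachine polling the libvirt DHCP leases for the new domain's address, retrying with a growing, jittered delay (235ms, 281ms, ... up to several seconds) until an IP appears. A simplified sketch of that retry pattern, under the assumption of a hypothetical `lookupIP` helper, is shown below; it is not minikube's retry.go:

```go
// Illustrative only: retry with growing, jittered backoff in the spirit of the
// "will retry after ..." lines above. lookupIP is a hypothetical stand-in for
// reading the host DHCP leases for the domain's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay and grow it, roughly matching the 235ms, 281ms,
		// 316ms, ... sequence in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```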
	I0108 23:07:41.769560  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Getting to WaitForSSH function...
	I0108 23:07:41.769572  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Waiting for SSH to be available...
	I0108 23:07:41.772620  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.772991  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:41.773031  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.773228  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Using SSH client type: external
	I0108 23:07:41.773260  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa (-rw-------)
	I0108 23:07:41.773299  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:07:41.773316  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | About to run SSH command:
	I0108 23:07:41.773337  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | exit 0
	I0108 23:07:41.871826  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | SSH cmd err, output: <nil>: 
	I0108 23:07:41.872209  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) KVM machine creation complete!
	I0108 23:07:41.872605  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetConfigRaw
	I0108 23:07:41.873270  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:41.873574  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:41.873764  415913 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 23:07:41.873790  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetState
	I0108 23:07:41.875238  415913 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 23:07:41.875256  415913 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 23:07:41.875262  415913 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 23:07:41.875269  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:41.877697  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.878156  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:41.878194  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:41.878344  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:41.878586  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:41.878922  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:41.879135  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:41.879336  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:41.879775  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:41.879793  415913 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 23:07:42.011007  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
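WaitForSSH above probes the guest by running `exit 0` over SSH until the command succeeds. Below is a minimal sketch of the same probe using golang.org/x/crypto/ssh; using that library is an assumption for illustration only, since the earlier command line shows libmachine shelling out to the external ssh client for this step. The address, user and key path are the ones shown in the log.

```go
// Illustrative only: probing SSH readiness by running `exit 0`, as the
// WaitForSSH step above does.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func probeSSH(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // succeeds once sshd accepts commands
}

func main() {
	err := probeSSH("192.168.39.117:22", "docker",
		"/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa")
	fmt.Println("ssh probe:", err)
}
```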
	I0108 23:07:42.011033  415913 main.go:141] libmachine: Detecting the provisioner...
	I0108 23:07:42.011043  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.014368  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.014885  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.014922  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.015130  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:42.015407  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.015625  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.015796  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:42.015937  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:42.016341  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:42.016364  415913 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 23:07:42.152831  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 23:07:42.152958  415913 main.go:141] libmachine: found compatible host: buildroot
	I0108 23:07:42.152970  415913 main.go:141] libmachine: Provisioning with buildroot...
	I0108 23:07:42.152980  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetMachineName
	I0108 23:07:42.153315  415913 buildroot.go:166] provisioning hostname "ingress-addon-legacy-132808"
	I0108 23:07:42.153351  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetMachineName
	I0108 23:07:42.153560  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.156358  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.156728  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.156774  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.156923  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:42.157117  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.157308  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.157456  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:42.157638  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:42.157977  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:42.157993  415913 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-132808 && echo "ingress-addon-legacy-132808" | sudo tee /etc/hostname
	I0108 23:07:42.308267  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-132808
	
	I0108 23:07:42.308312  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.311217  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.311598  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.311633  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.311831  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:42.312061  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.312249  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.312347  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:42.312517  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:42.312889  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:42.312910  415913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-132808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-132808/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-132808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:07:42.459009  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:07:42.459045  415913 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:07:42.459066  415913 buildroot.go:174] setting up certificates
	I0108 23:07:42.459077  415913 provision.go:83] configureAuth start
	I0108 23:07:42.459088  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetMachineName
	I0108 23:07:42.459447  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetIP
	I0108 23:07:42.462657  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.463106  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.463145  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.463273  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.465531  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.465853  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.465881  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.466050  415913 provision.go:138] copyHostCerts
	I0108 23:07:42.466082  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:07:42.466129  415913 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:07:42.466138  415913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:07:42.466208  415913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:07:42.466296  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:07:42.466316  415913 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:07:42.466323  415913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:07:42.466347  415913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:07:42.466389  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:07:42.466407  415913 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:07:42.466413  415913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:07:42.466433  415913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:07:42.466480  415913 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-132808 san=[192.168.39.117 192.168.39.117 localhost 127.0.0.1 minikube ingress-addon-legacy-132808]
	I0108 23:07:42.640587  415913 provision.go:172] copyRemoteCerts
	I0108 23:07:42.640683  415913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:07:42.640721  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.643908  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.644247  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.644273  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.644576  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:42.644819  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.645010  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:42.645182  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:07:42.743938  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:07:42.744020  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:07:42.770864  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:07:42.770943  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 23:07:42.798355  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:07:42.798450  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:07:42.827579  415913 provision.go:86] duration metric: configureAuth took 368.486284ms
	I0108 23:07:42.827614  415913 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:07:42.827836  415913 config.go:182] Loaded profile config "ingress-addon-legacy-132808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 23:07:42.827941  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:42.830816  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.831123  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:42.831148  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:42.831422  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:42.831623  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.831803  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:42.832002  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:42.832196  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:42.832685  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:42.832712  415913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:07:43.181587  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:07:43.181636  415913 main.go:141] libmachine: Checking connection to Docker...
	I0108 23:07:43.181662  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetURL
	I0108 23:07:43.183084  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Using libvirt version 6000000
	I0108 23:07:43.185514  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.185953  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.185981  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.186175  415913 main.go:141] libmachine: Docker is up and running!
	I0108 23:07:43.186188  415913 main.go:141] libmachine: Reticulating splines...
	I0108 23:07:43.186195  415913 client.go:171] LocalClient.Create took 26.090220949s
	I0108 23:07:43.186222  415913 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-132808" took 26.090286462s
	I0108 23:07:43.186238  415913 start.go:300] post-start starting for "ingress-addon-legacy-132808" (driver="kvm2")
	I0108 23:07:43.186254  415913 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:07:43.186278  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:43.186537  415913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:07:43.186573  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:43.189063  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.189394  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.189420  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.189567  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:43.189782  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:43.189940  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:43.190066  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:07:43.284969  415913 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:07:43.289874  415913 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:07:43.289912  415913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:07:43.290018  415913 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:07:43.290111  415913 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:07:43.290127  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:07:43.290231  415913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:07:43.299965  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:07:43.324789  415913 start.go:303] post-start completed in 138.533585ms
	I0108 23:07:43.324843  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetConfigRaw
	I0108 23:07:43.325599  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetIP
	I0108 23:07:43.328346  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.328713  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.328743  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.329000  415913 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/config.json ...
	I0108 23:07:43.329223  415913 start.go:128] duration metric: createHost completed in 26.254384235s
	I0108 23:07:43.329253  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:43.331709  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.331996  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.332023  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.332183  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:43.332413  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:43.332595  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:43.332756  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:43.332898  415913 main.go:141] libmachine: Using SSH client type: native
	I0108 23:07:43.333230  415913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0108 23:07:43.333242  415913 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:07:43.468396  415913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704755263.450209993
	
	I0108 23:07:43.468420  415913 fix.go:206] guest clock: 1704755263.450209993
	I0108 23:07:43.468427  415913 fix.go:219] Guest: 2024-01-08 23:07:43.450209993 +0000 UTC Remote: 2024-01-08 23:07:43.329238254 +0000 UTC m=+31.066907358 (delta=120.971739ms)
	I0108 23:07:43.468469  415913 fix.go:190] guest clock delta is within tolerance: 120.971739ms
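For context on the guest-clock check logged just above: after creating the VM, minikube compares the guest's reported clock with the timestamp it recorded on the host side and only treats the machine as needing a time resync when the difference exceeds a tolerance. The following is a minimal, illustrative Go sketch of that comparison using the exact values from this log; the tolerance constant maxClockDelta is a hypothetical placeholder, not minikube's actual setting.

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is a hypothetical tolerance used only for this sketch.
const maxClockDelta = 2 * time.Second

func main() {
	// Values from the log lines above: guest clock vs. the host-side "Remote" timestamp.
	guest := time.Unix(0, 1704755263450209993) // 2024-01-08 23:07:43.450209993 +0000 UTC
	remote := time.Date(2024, 1, 8, 23, 7, 43, 329238254, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta > maxClockDelta {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints ~120.971739ms for these values
}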
	I0108 23:07:43.468487  415913 start.go:83] releasing machines lock for "ingress-addon-legacy-132808", held for 26.393770804s
	I0108 23:07:43.468520  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:43.468824  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetIP
	I0108 23:07:43.471704  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.472042  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.472071  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.472301  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:43.472890  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:43.473083  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:07:43.473187  415913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:07:43.473256  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:43.473310  415913 ssh_runner.go:195] Run: cat /version.json
	I0108 23:07:43.473335  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:07:43.475913  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.476155  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.476255  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.476289  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.476428  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:43.476501  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:43.476528  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:43.476637  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:43.476728  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:07:43.476791  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:43.476881  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:07:43.476959  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:07:43.477000  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:07:43.477146  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:07:43.569480  415913 ssh_runner.go:195] Run: systemctl --version
	I0108 23:07:43.596716  415913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:07:43.766779  415913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 23:07:43.773502  415913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:07:43.773594  415913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:07:43.790774  415913 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:07:43.790805  415913 start.go:475] detecting cgroup driver to use...
	I0108 23:07:43.790892  415913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:07:43.805057  415913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:07:43.819019  415913 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:07:43.819111  415913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:07:43.833390  415913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:07:43.847981  415913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:07:43.957131  415913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:07:44.081568  415913 docker.go:219] disabling docker service ...
	I0108 23:07:44.081650  415913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:07:44.097602  415913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:07:44.110924  415913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:07:44.231384  415913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:07:44.354101  415913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:07:44.368066  415913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:07:44.388789  415913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:07:44.388857  415913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:07:44.399123  415913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:07:44.399235  415913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:07:44.410329  415913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:07:44.421470  415913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:07:44.431928  415913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:07:44.442684  415913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:07:44.452564  415913 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:07:44.452645  415913 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 23:07:44.466307  415913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:07:44.476855  415913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:07:44.596830  415913 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 23:07:44.778216  415913 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:07:44.778322  415913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:07:44.784439  415913 start.go:543] Will wait 60s for crictl version
	I0108 23:07:44.784533  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:44.788854  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:07:44.832022  415913 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:07:44.832107  415913 ssh_runner.go:195] Run: crio --version
	I0108 23:07:44.883477  415913 ssh_runner.go:195] Run: crio --version
	I0108 23:07:44.937997  415913 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0108 23:07:44.939488  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetIP
	I0108 23:07:44.942870  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:44.943303  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:07:44.943340  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:07:44.943716  415913 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:07:44.948679  415913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:07:44.964097  415913 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 23:07:44.964194  415913 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:07:45.006356  415913 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 23:07:45.006474  415913 ssh_runner.go:195] Run: which lz4
	I0108 23:07:45.011526  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 23:07:45.011635  415913 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 23:07:45.015929  415913 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:07:45.015974  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 23:07:47.209886  415913 crio.go:444] Took 2.198280 seconds to copy over tarball
	I0108 23:07:47.209994  415913 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 23:07:50.530838  415913 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320806115s)
	I0108 23:07:50.530875  415913 crio.go:451] Took 3.320956 seconds to extract the tarball
	I0108 23:07:50.530887  415913 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 23:07:50.576483  415913 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:07:50.635123  415913 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 23:07:50.635156  415913 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 23:07:50.635253  415913 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:07:50.635293  415913 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:07:50.635320  415913 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:07:50.635390  415913 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 23:07:50.635455  415913 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 23:07:50.635471  415913 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:07:50.635506  415913 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:07:50.635602  415913 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:07:50.636832  415913 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:07:50.636862  415913 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:07:50.636831  415913 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 23:07:50.636839  415913 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:07:50.636954  415913 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:07:50.637033  415913 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:07:50.637078  415913 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 23:07:50.637182  415913 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:07:50.806927  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:07:50.812718  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:07:50.831023  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 23:07:50.838097  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:07:50.852937  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 23:07:50.856251  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 23:07:50.856862  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:07:50.875709  415913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:07:50.936021  415913 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 23:07:50.936065  415913 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:07:50.936108  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.013956  415913 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 23:07:51.014035  415913 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 23:07:51.014079  415913 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:07:51.014041  415913 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 23:07:51.014221  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.014166  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.036853  415913 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 23:07:51.036906  415913 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:07:51.036949  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.059466  415913 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 23:07:51.059490  415913 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 23:07:51.059546  415913 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:07:51.059549  415913 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:07:51.059563  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 23:07:51.059578  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.059594  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.059639  415913 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 23:07:51.059673  415913 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 23:07:51.059702  415913 ssh_runner.go:195] Run: which crictl
	I0108 23:07:51.059716  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 23:07:51.059702  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 23:07:51.059748  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 23:07:51.158858  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 23:07:51.158895  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 23:07:51.176849  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 23:07:51.176932  415913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 23:07:51.176969  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 23:07:51.176986  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 23:07:51.177013  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 23:07:51.227868  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 23:07:51.256390  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 23:07:51.256434  415913 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 23:07:51.256495  415913 cache_images.go:92] LoadImages completed in 621.320061ms
	W0108 23:07:51.256599  415913 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0108 23:07:51.256667  415913 ssh_runner.go:195] Run: crio config
	I0108 23:07:51.323747  415913 cni.go:84] Creating CNI manager for ""
	I0108 23:07:51.323781  415913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:07:51.323807  415913 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:07:51.323835  415913 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-132808 NodeName:ingress-addon-legacy-132808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 23:07:51.324019  415913 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-132808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 23:07:51.324129  415913 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-132808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-132808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
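The [Unit]/[Service] block and trailing config struct above are the kubelet systemd drop-in and cluster config that minikube renders for this v1.18.20 / CRI-O profile before copying them to the node (the 10-kubeadm.conf and kubelet.service scp writes appear a few lines below). As a purely illustrative sketch, and not minikube's actual implementation, a drop-in like this can be produced from a Go text/template; the kubeletOpts struct and template string here are hypothetical stand-ins.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a hypothetical stand-in for the values minikube substitutes into its template.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	opts := kubeletOpts{
		KubernetesVersion: "v1.18.20",
		NodeName:          "ingress-addon-legacy-132808",
		NodeIP:            "192.168.39.117",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	// Render to stdout; the real flow copies the rendered unit to /etc/systemd/system/kubelet.service.d/.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}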
	I0108 23:07:51.324205  415913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 23:07:51.333652  415913 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:07:51.333737  415913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:07:51.343500  415913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0108 23:07:51.364301  415913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 23:07:51.384665  415913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0108 23:07:51.404491  415913 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I0108 23:07:51.409058  415913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:07:51.423824  415913 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808 for IP: 192.168.39.117
	I0108 23:07:51.423859  415913 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.424024  415913 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:07:51.424079  415913 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:07:51.424125  415913 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key
	I0108 23:07:51.424143  415913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt with IP's: []
	I0108 23:07:51.513957  415913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt ...
	I0108 23:07:51.513994  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: {Name:mk8d7077b9750380f14ead4fde18d1efb692628a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.514220  415913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key ...
	I0108 23:07:51.514245  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key: {Name:mkda4be4db9197dd876d222cc35b92b4e9bb2a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.514363  415913 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key.55b7bb41
	I0108 23:07:51.514381  415913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt.55b7bb41 with IP's: [192.168.39.117 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 23:07:51.770047  415913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt.55b7bb41 ...
	I0108 23:07:51.770086  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt.55b7bb41: {Name:mk7def3add9e84b020f456854a4aa6b062b698a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.770293  415913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key.55b7bb41 ...
	I0108 23:07:51.770316  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key.55b7bb41: {Name:mkeeb59bd6f051b8eb9bcbca6dc8a1b873c8b22c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.770437  415913 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt.55b7bb41 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt
	I0108 23:07:51.770540  415913 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key.55b7bb41 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key
	I0108 23:07:51.770624  415913 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.key
	I0108 23:07:51.770640  415913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.crt with IP's: []
	I0108 23:07:51.826986  415913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.crt ...
	I0108 23:07:51.827030  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.crt: {Name:mke8f8433d9d88ec5bc5117cdba8d83988235245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:07:51.827290  415913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.key ...
	I0108 23:07:51.827314  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.key: {Name:mk8a0cabb23795f0c161f294181afa5645f765e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
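The crypto.go/lock.go lines above show minikube issuing a client certificate, an apiserver serving certificate with the profile's IP SANs, and an aggregator (proxy-client) certificate, each signed by the CAs it already keeps under .minikube. For readers unfamiliar with that flow, below is a small self-contained Go sketch of signing a server certificate with a freshly generated CA using only the standard library; it is illustrative only, and the SANs, lifetimes, and key size are placeholders rather than minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key pair; minikube instead reuses ca.key/ca.crt from its .minikube directory.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with IP SANs similar to the apiserver cert generated in the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.117"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// PEM-encode just the issued certificate; the corresponding keys would be written alongside it.
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		panic(err)
	}
}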
	I0108 23:07:51.827464  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 23:07:51.827543  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 23:07:51.827560  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 23:07:51.827571  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 23:07:51.827581  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:07:51.827593  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:07:51.827603  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:07:51.827623  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:07:51.827691  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:07:51.827752  415913 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:07:51.827767  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:07:51.827793  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:07:51.827819  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:07:51.827851  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:07:51.827918  415913 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:07:51.827958  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:07:51.827974  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:07:51.827988  415913 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:07:51.828837  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:07:51.861652  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 23:07:51.888023  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:07:51.918057  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 23:07:51.949037  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:07:51.977329  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:07:52.005397  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:07:52.033150  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:07:52.060138  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:07:52.086374  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:07:52.113130  415913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:07:52.138824  415913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:07:52.157275  415913 ssh_runner.go:195] Run: openssl version
	I0108 23:07:52.164317  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:07:52.176513  415913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:07:52.181647  415913 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:07:52.181727  415913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:07:52.187839  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:07:52.200516  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:07:52.211713  415913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:07:52.217407  415913 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:07:52.217503  415913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:07:52.224218  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0108 23:07:52.235474  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:07:52.246633  415913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:07:52.251933  415913 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:07:52.252011  415913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:07:52.257904  415913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:07:52.269961  415913 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:07:52.274817  415913 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:07:52.274923  415913 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-132808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-132808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:07:52.275031  415913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:07:52.275157  415913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:07:52.317690  415913 cri.go:89] found id: ""
	I0108 23:07:52.317828  415913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:07:52.328927  415913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:07:52.339167  415913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:07:52.349797  415913 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:07:52.349846  415913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0108 23:07:52.411052  415913 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 23:07:52.411473  415913 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 23:07:52.564501  415913 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:07:52.564644  415913 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:07:52.564775  415913 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 23:07:52.800490  415913 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:07:52.800582  415913 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:07:52.800617  415913 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 23:07:52.950517  415913 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:07:52.952726  415913 out.go:204]   - Generating certificates and keys ...
	I0108 23:07:52.952897  415913 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 23:07:52.953010  415913 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 23:07:53.111156  415913 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:07:53.188663  415913 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:07:53.567338  415913 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 23:07:53.708421  415913 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 23:07:54.157541  415913 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 23:07:54.157775  415913 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-132808 localhost] and IPs [192.168.39.117 127.0.0.1 ::1]
	I0108 23:07:54.368584  415913 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 23:07:54.369031  415913 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-132808 localhost] and IPs [192.168.39.117 127.0.0.1 ::1]
	I0108 23:07:54.443647  415913 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:07:54.614591  415913 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:07:54.813647  415913 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 23:07:54.813717  415913 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:07:54.951648  415913 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:07:55.223966  415913 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:07:55.400822  415913 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:07:55.579519  415913 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:07:55.580159  415913 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:07:55.582227  415913 out.go:204]   - Booting up control plane ...
	I0108 23:07:55.582350  415913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:07:55.591695  415913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:07:55.592229  415913 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:07:55.593045  415913 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:07:55.595232  415913 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:08:05.098258  415913 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503898 seconds
	I0108 23:08:05.098425  415913 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:08:05.117513  415913 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:08:05.643659  415913 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:08:05.643888  415913 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-132808 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 23:08:06.155776  415913 kubeadm.go:322] [bootstrap-token] Using token: ka4sup.j4n9aam4mi1o5yym
	I0108 23:08:06.157242  415913 out.go:204]   - Configuring RBAC rules ...
	I0108 23:08:06.157405  415913 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:08:06.166125  415913 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:08:06.176223  415913 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:08:06.180555  415913 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:08:06.184044  415913 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:08:06.194128  415913 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:08:06.209287  415913 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:08:06.495628  415913 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 23:08:06.617922  415913 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 23:08:06.617946  415913 kubeadm.go:322] 
	I0108 23:08:06.618018  415913 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 23:08:06.618045  415913 kubeadm.go:322] 
	I0108 23:08:06.618122  415913 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 23:08:06.618130  415913 kubeadm.go:322] 
	I0108 23:08:06.618151  415913 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 23:08:06.618205  415913 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:08:06.618249  415913 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:08:06.618259  415913 kubeadm.go:322] 
	I0108 23:08:06.618309  415913 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 23:08:06.618392  415913 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:08:06.618483  415913 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:08:06.618498  415913 kubeadm.go:322] 
	I0108 23:08:06.618602  415913 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:08:06.618699  415913 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 23:08:06.618715  415913 kubeadm.go:322] 
	I0108 23:08:06.618821  415913 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ka4sup.j4n9aam4mi1o5yym \
	I0108 23:08:06.618964  415913 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0108 23:08:06.619002  415913 kubeadm.go:322]     --control-plane 
	I0108 23:08:06.619012  415913 kubeadm.go:322] 
	I0108 23:08:06.619127  415913 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:08:06.619138  415913 kubeadm.go:322] 
	I0108 23:08:06.619262  415913 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ka4sup.j4n9aam4mi1o5yym \
	I0108 23:08:06.619428  415913 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0108 23:08:06.619713  415913 kubeadm.go:322] W0108 23:07:52.403231     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 23:08:06.619866  415913 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:08:06.620026  415913 kubeadm.go:322] W0108 23:07:55.584763     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 23:08:06.620222  415913 kubeadm.go:322] W0108 23:07:55.586064     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 23:08:06.620265  415913 cni.go:84] Creating CNI manager for ""
	I0108 23:08:06.620278  415913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:08:06.622091  415913 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 23:08:06.623714  415913 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 23:08:06.634374  415913 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
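The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log; a bridge CNI config of this shape generally looks roughly like the sketch below (field values illustrative, not the exact file minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF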
	I0108 23:08:06.654903  415913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:08:06.654977  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:06.655050  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=ingress-addon-legacy-132808 minikube.k8s.io/updated_at=2024_01_08T23_08_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:06.831459  415913 ops.go:34] apiserver oom_adj: -16
	I0108 23:08:06.831652  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:07.332684  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:07.831767  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:08.331955  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:08.831939  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:09.332643  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:09.831971  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:10.332815  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:10.831842  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:11.332734  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:11.831975  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:12.332102  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:12.831756  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:13.332124  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:13.832755  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:14.332451  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:14.832056  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:15.332457  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:15.832533  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:16.332564  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:16.832464  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:17.332188  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:17.832454  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:18.332179  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:18.831730  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:19.332344  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:19.832015  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:20.331995  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:20.831933  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:21.331720  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:21.832404  415913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:08:22.198658  415913 kubeadm.go:1088] duration metric: took 15.5437379s to wait for elevateKubeSystemPrivileges.
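The repeated "kubectl get sa default" calls above are minikube retrying, roughly every 500ms, until the cluster's default ServiceAccount is available; that wait is what the elevateKubeSystemPrivileges duration measures. The same check can be run by hand with the binaries already on the node:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig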
	I0108 23:08:22.198703  415913 kubeadm.go:406] StartCluster complete in 29.923790865s
	I0108 23:08:22.198725  415913 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:08:22.198844  415913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:08:22.199741  415913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:08:22.200086  415913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:08:22.200225  415913 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:08:22.200312  415913 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-132808"
	I0108 23:08:22.200410  415913 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-132808"
	I0108 23:08:22.200443  415913 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-132808"
	I0108 23:08:22.200476  415913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-132808"
	I0108 23:08:22.200535  415913 host.go:66] Checking if "ingress-addon-legacy-132808" exists ...
	I0108 23:08:22.200663  415913 config.go:182] Loaded profile config "ingress-addon-legacy-132808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 23:08:22.201080  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:08:22.201147  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:08:22.201094  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:08:22.201252  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:08:22.201153  415913 kapi.go:59] client config for ingress-addon-legacy-132808: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:08:22.202141  415913 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 23:08:22.218759  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I0108 23:08:22.218759  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I0108 23:08:22.219349  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:08:22.219380  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:08:22.219905  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:08:22.219910  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:08:22.219934  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:08:22.219937  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:08:22.220281  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:08:22.220310  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:08:22.220517  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetState
	I0108 23:08:22.220867  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:08:22.220911  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:08:22.223387  415913 kapi.go:59] client config for ingress-addon-legacy-132808: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:08:22.223815  415913 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-132808"
	I0108 23:08:22.223872  415913 host.go:66] Checking if "ingress-addon-legacy-132808" exists ...
	I0108 23:08:22.224337  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:08:22.224376  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:08:22.237878  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0108 23:08:22.238390  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:08:22.238957  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:08:22.238984  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:08:22.239401  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:08:22.239618  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetState
	I0108 23:08:22.240468  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0108 23:08:22.240835  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:08:22.241341  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:08:22.241364  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:08:22.241882  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:08:22.241980  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:08:22.244601  415913 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:08:22.242496  415913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:08:22.246189  415913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:08:22.246412  415913 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:08:22.246428  415913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 23:08:22.246457  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:08:22.250163  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:08:22.250545  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:08:22.250648  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:08:22.251008  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:08:22.251174  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:08:22.251336  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:08:22.251475  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:08:22.261643  415913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36723
	I0108 23:08:22.262128  415913 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:08:22.262657  415913 main.go:141] libmachine: Using API Version  1
	I0108 23:08:22.262681  415913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:08:22.263103  415913 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:08:22.263323  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetState
	I0108 23:08:22.265077  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .DriverName
	I0108 23:08:22.265375  415913 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 23:08:22.265396  415913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 23:08:22.265417  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHHostname
	I0108 23:08:22.268577  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:08:22.269164  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:7d:70", ip: ""} in network mk-ingress-addon-legacy-132808: {Iface:virbr1 ExpiryTime:2024-01-09 00:07:33 +0000 UTC Type:0 Mac:52:54:00:9b:7d:70 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:ingress-addon-legacy-132808 Clientid:01:52:54:00:9b:7d:70}
	I0108 23:08:22.269194  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | domain ingress-addon-legacy-132808 has defined IP address 192.168.39.117 and MAC address 52:54:00:9b:7d:70 in network mk-ingress-addon-legacy-132808
	I0108 23:08:22.269391  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHPort
	I0108 23:08:22.269665  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHKeyPath
	I0108 23:08:22.269843  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .GetSSHUsername
	I0108 23:08:22.270017  415913 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/ingress-addon-legacy-132808/id_rsa Username:docker}
	I0108 23:08:22.476277  415913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 23:08:22.490916  415913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:08:22.670140  415913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
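The pipeline above edits the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the host-side gateway (192.168.39.1 in this run) ahead of the forward directive. A quick way to confirm the record landed, using plain kubectl against the same cluster:

    kubectl --context ingress-addon-legacy-132808 -n kube-system \
      get configmap coredns -o yaml | grep -A4 'hosts {'
    # expected to show:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }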
	I0108 23:08:22.898040  415913 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-132808" context rescaled to 1 replicas
	I0108 23:08:22.898088  415913 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:08:22.900301  415913 out.go:177] * Verifying Kubernetes components...
	I0108 23:08:22.902106  415913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:08:23.343408  415913 main.go:141] libmachine: Making call to close driver server
	I0108 23:08:23.343444  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Close
	I0108 23:08:23.343755  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Closing plugin on server side
	I0108 23:08:23.343791  415913 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:08:23.343817  415913 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:08:23.343833  415913 main.go:141] libmachine: Making call to close driver server
	I0108 23:08:23.343844  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Close
	I0108 23:08:23.344105  415913 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:08:23.344137  415913 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:08:23.344168  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Closing plugin on server side
	I0108 23:08:23.382773  415913 main.go:141] libmachine: Making call to close driver server
	I0108 23:08:23.382807  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Close
	I0108 23:08:23.383269  415913 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:08:23.383292  415913 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:08:23.383314  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Closing plugin on server side
	I0108 23:08:23.455385  415913 main.go:141] libmachine: Making call to close driver server
	I0108 23:08:23.455417  415913 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 23:08:23.455437  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Close
	I0108 23:08:23.455859  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) DBG | Closing plugin on server side
	I0108 23:08:23.455915  415913 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:08:23.455927  415913 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:08:23.455939  415913 main.go:141] libmachine: Making call to close driver server
	I0108 23:08:23.455950  415913 main.go:141] libmachine: (ingress-addon-legacy-132808) Calling .Close
	I0108 23:08:23.456252  415913 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:08:23.456278  415913 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:08:23.458347  415913 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 23:08:23.456466  415913 kapi.go:59] client config for ingress-addon-legacy-132808: &rest.Config{Host:"https://192.168.39.117:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]
uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:08:23.460419  415913 addons.go:508] enable addons completed in 1.260206952s: enabled=[default-storageclass storage-provisioner]
	I0108 23:08:23.460706  415913 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-132808" to be "Ready" ...
	I0108 23:08:23.475265  415913 node_ready.go:49] node "ingress-addon-legacy-132808" has status "Ready":"True"
	I0108 23:08:23.475312  415913 node_ready.go:38] duration metric: took 14.55753ms waiting for node "ingress-addon-legacy-132808" to be "Ready" ...
	I0108 23:08:23.475332  415913 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:08:23.484033  415913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4px8c" in "kube-system" namespace to be "Ready" ...
	I0108 23:08:25.491967  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:27.494086  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:29.992374  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:32.493311  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:34.992482  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:37.494165  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:39.991688  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:41.993021  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:44.491594  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:46.492463  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:48.992855  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:51.491900  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:53.991347  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:56.490977  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:58.491255  415913 pod_ready.go:102] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"False"
	I0108 23:08:59.992218  415913 pod_ready.go:92] pod "coredns-66bff467f8-4px8c" in "kube-system" namespace has status "Ready":"True"
	I0108 23:08:59.992247  415913 pod_ready.go:81] duration metric: took 36.508178182s waiting for pod "coredns-66bff467f8-4px8c" in "kube-system" namespace to be "Ready" ...
	I0108 23:08:59.992257  415913 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4xk8h" in "kube-system" namespace to be "Ready" ...
	I0108 23:08:59.994239  415913 pod_ready.go:97] error getting pod "coredns-66bff467f8-4xk8h" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-4xk8h" not found
	I0108 23:08:59.994279  415913 pod_ready.go:81] duration metric: took 2.014451ms waiting for pod "coredns-66bff467f8-4xk8h" in "kube-system" namespace to be "Ready" ...
	E0108 23:08:59.994296  415913 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-4xk8h" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-4xk8h" not found
	I0108 23:08:59.994306  415913 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:08:59.999726  415913 pod_ready.go:92] pod "etcd-ingress-addon-legacy-132808" in "kube-system" namespace has status "Ready":"True"
	I0108 23:08:59.999758  415913 pod_ready.go:81] duration metric: took 5.441793ms waiting for pod "etcd-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:08:59.999771  415913 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.004921  415913 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-132808" in "kube-system" namespace has status "Ready":"True"
	I0108 23:09:00.004943  415913 pod_ready.go:81] duration metric: took 5.163916ms waiting for pod "kube-apiserver-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.004951  415913 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.010630  415913 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-132808" in "kube-system" namespace has status "Ready":"True"
	I0108 23:09:00.010654  415913 pod_ready.go:81] duration metric: took 5.696903ms waiting for pod "kube-controller-manager-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.010663  415913 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hq95h" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.185798  415913 request.go:629] Waited for 172.37115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/nodes/ingress-addon-legacy-132808
	I0108 23:09:00.189417  415913 pod_ready.go:92] pod "kube-proxy-hq95h" in "kube-system" namespace has status "Ready":"True"
	I0108 23:09:00.189450  415913 pod_ready.go:81] duration metric: took 178.778429ms waiting for pod "kube-proxy-hq95h" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.189464  415913 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.384893  415913 request.go:629] Waited for 195.316611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-132808
	I0108 23:09:00.585739  415913 request.go:629] Waited for 197.444696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/nodes/ingress-addon-legacy-132808
	I0108 23:09:00.589880  415913 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-132808" in "kube-system" namespace has status "Ready":"True"
	I0108 23:09:00.589907  415913 pod_ready.go:81] duration metric: took 400.43505ms waiting for pod "kube-scheduler-ingress-addon-legacy-132808" in "kube-system" namespace to be "Ready" ...
	I0108 23:09:00.589923  415913 pod_ready.go:38] duration metric: took 37.1145779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
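The pod_ready polling above checks each system-critical pod's Ready condition in turn. Expressed with stock kubectl, an equivalent wait (labels and the 6m budget taken from the log) would be roughly:

    kubectl --context ingress-addon-legacy-132808 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context ingress-addon-legacy-132808 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=6m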
	I0108 23:09:00.589943  415913 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:09:00.590009  415913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:09:00.607205  415913 api_server.go:72] duration metric: took 37.709082499s to wait for apiserver process to appear ...
	I0108 23:09:00.607238  415913 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:09:00.607267  415913 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0108 23:09:00.613354  415913 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0108 23:09:00.614217  415913 api_server.go:141] control plane version: v1.18.20
	I0108 23:09:00.614246  415913 api_server.go:131] duration metric: took 7.000298ms to wait for apiserver health ...
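The healthz probe is an ordinary HTTPS GET against the API server; the same check from outside the VM (with certificate verification skipped, since the serving cert is signed by minikube's own CA) is simply:

    curl -k https://192.168.39.117:8443/healthz
    # prints "ok" when healthy, matching the 200 response logged above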
	I0108 23:09:00.614257  415913 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:09:00.785764  415913 request.go:629] Waited for 171.406793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/namespaces/kube-system/pods
	I0108 23:09:00.791912  415913 system_pods.go:59] 7 kube-system pods found
	I0108 23:09:00.791944  415913 system_pods.go:61] "coredns-66bff467f8-4px8c" [75f49cb3-f576-4eff-977f-7aa21a8c5810] Running
	I0108 23:09:00.791949  415913 system_pods.go:61] "etcd-ingress-addon-legacy-132808" [e1bc783d-bb80-4f59-86eb-7bb9d4352e52] Running
	I0108 23:09:00.791953  415913 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-132808" [f0668573-8e1f-4810-86bb-173ac0c17bf1] Running
	I0108 23:09:00.791966  415913 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-132808" [20ad4591-d2dc-44eb-879f-a687c3cc0e24] Running
	I0108 23:09:00.791970  415913 system_pods.go:61] "kube-proxy-hq95h" [9a6bfeaa-c475-407a-ae60-dbbbababae82] Running
	I0108 23:09:00.791974  415913 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-132808" [c9f08c83-25e7-4605-ae1a-464791ff68e6] Running
	I0108 23:09:00.791977  415913 system_pods.go:61] "storage-provisioner" [d496ec5d-4746-41a3-bf86-0f0174ae521c] Running
	I0108 23:09:00.791985  415913 system_pods.go:74] duration metric: took 177.720552ms to wait for pod list to return data ...
	I0108 23:09:00.791993  415913 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:09:00.985497  415913 request.go:629] Waited for 193.386372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:09:00.989069  415913 default_sa.go:45] found service account: "default"
	I0108 23:09:00.989110  415913 default_sa.go:55] duration metric: took 197.109231ms for default service account to be created ...
	I0108 23:09:00.989123  415913 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:09:01.185632  415913 request.go:629] Waited for 196.420974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/namespaces/kube-system/pods
	I0108 23:09:01.191584  415913 system_pods.go:86] 7 kube-system pods found
	I0108 23:09:01.191613  415913 system_pods.go:89] "coredns-66bff467f8-4px8c" [75f49cb3-f576-4eff-977f-7aa21a8c5810] Running
	I0108 23:09:01.191618  415913 system_pods.go:89] "etcd-ingress-addon-legacy-132808" [e1bc783d-bb80-4f59-86eb-7bb9d4352e52] Running
	I0108 23:09:01.191622  415913 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-132808" [f0668573-8e1f-4810-86bb-173ac0c17bf1] Running
	I0108 23:09:01.191626  415913 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-132808" [20ad4591-d2dc-44eb-879f-a687c3cc0e24] Running
	I0108 23:09:01.191630  415913 system_pods.go:89] "kube-proxy-hq95h" [9a6bfeaa-c475-407a-ae60-dbbbababae82] Running
	I0108 23:09:01.191638  415913 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-132808" [c9f08c83-25e7-4605-ae1a-464791ff68e6] Running
	I0108 23:09:01.191642  415913 system_pods.go:89] "storage-provisioner" [d496ec5d-4746-41a3-bf86-0f0174ae521c] Running
	I0108 23:09:01.191648  415913 system_pods.go:126] duration metric: took 202.519405ms to wait for k8s-apps to be running ...
	I0108 23:09:01.191656  415913 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:09:01.191705  415913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:09:01.205803  415913 system_svc.go:56] duration metric: took 14.133484ms WaitForService to wait for kubelet.
	I0108 23:09:01.205839  415913 kubeadm.go:581] duration metric: took 38.307726s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:09:01.205865  415913 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:09:01.385366  415913 request.go:629] Waited for 179.393507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.117:8443/api/v1/nodes
	I0108 23:09:01.388735  415913 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:09:01.388779  415913 node_conditions.go:123] node cpu capacity is 2
	I0108 23:09:01.388799  415913 node_conditions.go:105] duration metric: took 182.92303ms to run NodePressure ...
	I0108 23:09:01.388815  415913 start.go:228] waiting for startup goroutines ...
	I0108 23:09:01.388829  415913 start.go:233] waiting for cluster config update ...
	I0108 23:09:01.388842  415913 start.go:242] writing updated cluster config ...
	I0108 23:09:01.389221  415913 ssh_runner.go:195] Run: rm -f paused
	I0108 23:09:01.438231  415913 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 23:09:01.440579  415913 out.go:177] 
	W0108 23:09:01.442210  415913 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 23:09:01.443716  415913 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 23:09:01.445314  415913 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-132808" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 23:07:30 UTC, ends at Mon 2024-01-08 23:12:15 UTC. --
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.549503709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=15574210-1e79-4f43-b7a6-c83f8f1fd1fb name=/runtime.v1.RuntimeService/Version
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.551726262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=549253b0-d6d5-4c4e-bbb5-794cbc5f2c36 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.552238316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755535552225196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=549253b0-d6d5-4c4e-bbb5-794cbc5f2c36 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.552857970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e82aff48-ea80-42f2-99ff-68dc30c581db name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.552934642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e82aff48-ea80-42f2-99ff-68dc30c581db name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.553202736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5766b79193bed5f818df2a814765119a527f76e8c34071709c008a7abca169b0,PodSandboxId:19638132cacbec7b7463a50634e6a59b534149947db5191bc786984e94b640ed,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704755515907149268,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-2gzzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4384d9d-020a-4c0c-8893-f778e4b2815a,},Annotations:map[string]string{io.kubernetes.container.hash: e0a92b21,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b753d042355cb38061452d8c683397578447b8d59e70e3baf01f437b1d8f2f,PodSandboxId:5c76d1dc9275a6c23d77ab232e01ab868a06ef524885640c774dc7a04eccaeaa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704755373971929517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a359bd4b-8452-4dd0-914f-38e7a1fc5591,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 40daaad,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70a480bc7a3cf72e1563349fe51fd9a10699f04ff4f8cc19aae139562179673,PodSandboxId:1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704755357770989303,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-59rrn,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de4d1faf-86b2-4232-92de-a596c9d59f89,},Annotations:map[string]string{io.kubernetes.container.hash: ef08225,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d06c1b3c444a9695f91d019326e1354f28e05d1caf84d1e14ee835fcf7f399c3,PodSandboxId:38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea5
8fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347600779423,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rvbr6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,},Annotations:map[string]string{io.kubernetes.container.hash: a70eb15a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ba62768c5822d1faec9102a4c3f4ec70f2eedaee7ca63cefee9f99e738701a,PodSandboxId:003dcf4a1bed92072d0636fd343a357bca319fb0677bf4aed17f8dced5964bb6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-c
ertgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347105575301,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q7twx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b59fd534-0e4b-45c3-80dd-57e6edf44860,},Annotations:map[string]string{io.kubernetes.container.hash: bff672ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8440de4e23da446cb7725985789e5036fc7742add70f42bf0a8742591fbe46,PodSandboxId:2c20e0e183247953cc390fef1fc5b843c0ea10d2ce885c0735a1eab9d232dd0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:
&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755304167900681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496ec5d-4746-41a3-bf86-0f0174ae521c,},Annotations:map[string]string{io.kubernetes.container.hash: 595c420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fe4d55df168b3b406ef0205950a52a47de6d24bd77cc8d7701d4d528c4d0dd,PodSandboxId:d592403826fd94b21795f4e78cdaf90d14f1a8a11d2ad9fc470d369236711e4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704755303675931448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hq95h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a6bfeaa-c475-407a-ae60-dbbbababae82,},Annotations:map[string]string{io.kubernetes.container.hash: ad5abff1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66bd689a24ad3e1755446c918e5c09a0f9de3104f07bead63d1498e997ff31c3,PodSandboxId:97d398544cad87641ec253769d834a483781c13da75c45b15f7deb5122a80259,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704755302413495950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4px8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f49cb3-f576-4eff-977f-7aa21a8c5810,},Annotations:map[string]string{io.kubernetes.container.hash: 67b748ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e382a1e2026f80efc0a2371c527172cbf0036d0304f1b0de8c5b8024c6d8b723,PodSa
ndboxId:e8208b3f86310ea694958e5f442217478beedf18832379411401b8adb2d05378,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704755278978006164,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192738f9de0b7320290fd759ae2b29df,},Annotations:map[string]string{io.kubernetes.container.hash: 91f2ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daceaeb5405298e4892ff89a700934cabd970d174790f073435d8cdeceb096f,PodSandboxId:141ba8b0725546ad1903875a800bcf5084510d1
3eaea6c1e1954f4319b2cab03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704755277595921553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8f0f4a92624c25ea95ccaf396f8bb6,},Annotations:map[string]string{io.kubernetes.container.hash: 452d08b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c95a5b830fe11d8c9f398b33c06edea7f9e07593606cb121fdadf731a3565fb,PodSandboxId:5f1209063110200d8aaa31c7c86b6049fdabc7705f556
a6f8a6c6f7decc2058b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704755277278691565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d649cf3f76cec7672e8cf9d921ad20f3b0986db12a976f91fd7006c5ec62a7,PodSandboxId:32b2df694d9c28d
e8cd8fc158e74ed336d74a4d3a4ff5e48ea2db9a05f2eecbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704755277255069441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e82aff48-ea80-42f2-99ff-68dc30c581db name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.565324401Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=50bc5b0e-86b2-4dd3-b720-f56c19f55aae name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.565999365Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:19638132cacbec7b7463a50634e6a59b534149947db5191bc786984e94b640ed,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-2gzzn,Uid:e4384d9d-020a-4c0c-8893-f778e4b2815a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755513236825061,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-2gzzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4384d9d-020a-4c0c-8893-f778e4b2815a,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:11:52.880994366Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c76d1dc9275a6c23d77ab232e01ab868a06ef524885640c774dc7a04eccaeaa,Metadata:&PodSandboxMetadata{Name:nginx,Uid:a359bd4b-8452-4dd0-914f-38e7a1fc5591,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755370438924022,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a359bd4b-8452-4dd0-914f-38e7a1fc5591,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:09:30.097298202Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-59rrn,Uid:de4d1faf-86b2-4232-92de-a596c9d59f89,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704755350183923575,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-59rrn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.
uid: de4d1faf-86b2-4232-92de-a596c9d59f89,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:09:02.341744484Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-rvbr6,Uid:aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704755344306015775,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 38be4ee8-8133-4514-91c5-6ea489b8c09f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-rvbr6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:09:02.45773065
5Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:003dcf4a1bed92072d0636fd343a357bca319fb0677bf4aed17f8dced5964bb6,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-q7twx,Uid:b59fd534-0e4b-45c3-80dd-57e6edf44860,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704755344254715036,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: ef2932f0-76e9-4aff-a334-6c3fb83f36a2,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-q7twx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b59fd534-0e4b-45c3-80dd-57e6edf44860,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:09:02.402666839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c20e0e183247953cc390fef1fc5b843c0ea10d2ce885c0735a1eab9d232dd0e,Metadata:&PodSandbox
Metadata{Name:storage-provisioner,Uid:d496ec5d-4746-41a3-bf86-0f0174ae521c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755303799467568,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496ec5d-4746-41a3-bf86-0f0174ae521c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]
}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T23:08:23.451258397Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d592403826fd94b21795f4e78cdaf90d14f1a8a11d2ad9fc470d369236711e4b,Metadata:&PodSandboxMetadata{Name:kube-proxy-hq95h,Uid:9a6bfeaa-c475-407a-ae60-dbbbababae82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755301912785870,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hq95h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a6bfeaa-c475-407a-ae60-dbbbababae82,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:08:21.539615364Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97d398544cad87641ec253769d834a483781c13da75c45b15f
7deb5122a80259,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-4px8c,Uid:75f49cb3-f576-4eff-977f-7aa21a8c5810,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755301885203819,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-4px8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f49cb3-f576-4eff-977f-7aa21a8c5810,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:08:21.530689350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8208b3f86310ea694958e5f442217478beedf18832379411401b8adb2d05378,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-132808,Uid:192738f9de0b7320290fd759ae2b29df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755276776655372,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-132808,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 192738f9de0b7320290fd759ae2b29df,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.117:2379,kubernetes.io/config.hash: 192738f9de0b7320290fd759ae2b29df,kubernetes.io/config.seen: 2024-01-08T23:07:55.602241218Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f1209063110200d8aaa31c7c86b6049fdabc7705f556a6f8a6c6f7decc2058b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-132808,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755276750086946,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e2
7827b1f8d737725,kubernetes.io/config.seen: 2024-01-08T23:07:55.598798362Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:141ba8b0725546ad1903875a800bcf5084510d13eaea6c1e1954f4319b2cab03,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-132808,Uid:0c8f0f4a92624c25ea95ccaf396f8bb6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755276744144214,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8f0f4a92624c25ea95ccaf396f8bb6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.117:8443,kubernetes.io/config.hash: 0c8f0f4a92624c25ea95ccaf396f8bb6,kubernetes.io/config.seen: 2024-01-08T23:07:55.597693824Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32b2df694d9c28de8cd8fc158e74ed336d74a4d3a4ff5
e48ea2db9a05f2eecbb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-132808,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704755276641783896,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2024-01-08T23:07:55.600591886Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=50bc5b0e-86b2-4dd3-b720-f56c19f55aae name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.566823508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4599770c-badd-4909-aa59-3762e8543d75 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.566923715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4599770c-badd-4909-aa59-3762e8543d75 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.567189884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5766b79193bed5f818df2a814765119a527f76e8c34071709c008a7abca169b0,PodSandboxId:19638132cacbec7b7463a50634e6a59b534149947db5191bc786984e94b640ed,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704755515907149268,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-2gzzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4384d9d-020a-4c0c-8893-f778e4b2815a,},Annotations:map[string]string{io.kubernetes.container.hash: e0a92b21,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b753d042355cb38061452d8c683397578447b8d59e70e3baf01f437b1d8f2f,PodSandboxId:5c76d1dc9275a6c23d77ab232e01ab868a06ef524885640c774dc7a04eccaeaa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704755373971929517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a359bd4b-8452-4dd0-914f-38e7a1fc5591,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 40daaad,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70a480bc7a3cf72e1563349fe51fd9a10699f04ff4f8cc19aae139562179673,PodSandboxId:1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704755357770989303,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-59rrn,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de4d1faf-86b2-4232-92de-a596c9d59f89,},Annotations:map[string]string{io.kubernetes.container.hash: ef08225,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d06c1b3c444a9695f91d019326e1354f28e05d1caf84d1e14ee835fcf7f399c3,PodSandboxId:38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea5
8fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347600779423,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rvbr6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,},Annotations:map[string]string{io.kubernetes.container.hash: a70eb15a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ba62768c5822d1faec9102a4c3f4ec70f2eedaee7ca63cefee9f99e738701a,PodSandboxId:003dcf4a1bed92072d0636fd343a357bca319fb0677bf4aed17f8dced5964bb6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-c
ertgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347105575301,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q7twx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b59fd534-0e4b-45c3-80dd-57e6edf44860,},Annotations:map[string]string{io.kubernetes.container.hash: bff672ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8440de4e23da446cb7725985789e5036fc7742add70f42bf0a8742591fbe46,PodSandboxId:2c20e0e183247953cc390fef1fc5b843c0ea10d2ce885c0735a1eab9d232dd0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:
&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755304167900681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496ec5d-4746-41a3-bf86-0f0174ae521c,},Annotations:map[string]string{io.kubernetes.container.hash: 595c420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fe4d55df168b3b406ef0205950a52a47de6d24bd77cc8d7701d4d528c4d0dd,PodSandboxId:d592403826fd94b21795f4e78cdaf90d14f1a8a11d2ad9fc470d369236711e4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704755303675931448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hq95h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a6bfeaa-c475-407a-ae60-dbbbababae82,},Annotations:map[string]string{io.kubernetes.container.hash: ad5abff1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66bd689a24ad3e1755446c918e5c09a0f9de3104f07bead63d1498e997ff31c3,PodSandboxId:97d398544cad87641ec253769d834a483781c13da75c45b15f7deb5122a80259,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704755302413495950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4px8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f49cb3-f576-4eff-977f-7aa21a8c5810,},Annotations:map[string]string{io.kubernetes.container.hash: 67b748ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e382a1e2026f80efc0a2371c527172cbf0036d0304f1b0de8c5b8024c6d8b723,PodSa
ndboxId:e8208b3f86310ea694958e5f442217478beedf18832379411401b8adb2d05378,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704755278978006164,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192738f9de0b7320290fd759ae2b29df,},Annotations:map[string]string{io.kubernetes.container.hash: 91f2ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daceaeb5405298e4892ff89a700934cabd970d174790f073435d8cdeceb096f,PodSandboxId:141ba8b0725546ad1903875a800bcf5084510d1
3eaea6c1e1954f4319b2cab03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704755277595921553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8f0f4a92624c25ea95ccaf396f8bb6,},Annotations:map[string]string{io.kubernetes.container.hash: 452d08b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c95a5b830fe11d8c9f398b33c06edea7f9e07593606cb121fdadf731a3565fb,PodSandboxId:5f1209063110200d8aaa31c7c86b6049fdabc7705f556
a6f8a6c6f7decc2058b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704755277278691565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d649cf3f76cec7672e8cf9d921ad20f3b0986db12a976f91fd7006c5ec62a7,PodSandboxId:32b2df694d9c28d
e8cd8fc158e74ed336d74a4d3a4ff5e48ea2db9a05f2eecbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704755277255069441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4599770c-badd-4909-aa59-3762e8543d75 name=/runtime.v1alpha2.Runtime
Service/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.594777660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=301ec94b-4566-4fc4-adcb-eae4947a89e1 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.594865511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=301ec94b-4566-4fc4-adcb-eae4947a89e1 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.596613810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=09170dc2-fadb-42ed-a003-96d5adb5a1ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.597087413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755535597075096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=09170dc2-fadb-42ed-a003-96d5adb5a1ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.597756795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f61fee86-f9eb-4bfb-a785-0581bc05657b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.597830071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f61fee86-f9eb-4bfb-a785-0581bc05657b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.598116192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5766b79193bed5f818df2a814765119a527f76e8c34071709c008a7abca169b0,PodSandboxId:19638132cacbec7b7463a50634e6a59b534149947db5191bc786984e94b640ed,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704755515907149268,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-2gzzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4384d9d-020a-4c0c-8893-f778e4b2815a,},Annotations:map[string]string{io.kubernetes.container.hash: e0a92b21,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b753d042355cb38061452d8c683397578447b8d59e70e3baf01f437b1d8f2f,PodSandboxId:5c76d1dc9275a6c23d77ab232e01ab868a06ef524885640c774dc7a04eccaeaa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704755373971929517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a359bd4b-8452-4dd0-914f-38e7a1fc5591,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 40daaad,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70a480bc7a3cf72e1563349fe51fd9a10699f04ff4f8cc19aae139562179673,PodSandboxId:1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704755357770989303,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-59rrn,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de4d1faf-86b2-4232-92de-a596c9d59f89,},Annotations:map[string]string{io.kubernetes.container.hash: ef08225,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d06c1b3c444a9695f91d019326e1354f28e05d1caf84d1e14ee835fcf7f399c3,PodSandboxId:38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea5
8fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347600779423,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rvbr6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,},Annotations:map[string]string{io.kubernetes.container.hash: a70eb15a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ba62768c5822d1faec9102a4c3f4ec70f2eedaee7ca63cefee9f99e738701a,PodSandboxId:003dcf4a1bed92072d0636fd343a357bca319fb0677bf4aed17f8dced5964bb6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-c
ertgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347105575301,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q7twx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b59fd534-0e4b-45c3-80dd-57e6edf44860,},Annotations:map[string]string{io.kubernetes.container.hash: bff672ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8440de4e23da446cb7725985789e5036fc7742add70f42bf0a8742591fbe46,PodSandboxId:2c20e0e183247953cc390fef1fc5b843c0ea10d2ce885c0735a1eab9d232dd0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:
&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755304167900681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496ec5d-4746-41a3-bf86-0f0174ae521c,},Annotations:map[string]string{io.kubernetes.container.hash: 595c420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fe4d55df168b3b406ef0205950a52a47de6d24bd77cc8d7701d4d528c4d0dd,PodSandboxId:d592403826fd94b21795f4e78cdaf90d14f1a8a11d2ad9fc470d369236711e4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704755303675931448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hq95h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a6bfeaa-c475-407a-ae60-dbbbababae82,},Annotations:map[string]string{io.kubernetes.container.hash: ad5abff1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66bd689a24ad3e1755446c918e5c09a0f9de3104f07bead63d1498e997ff31c3,PodSandboxId:97d398544cad87641ec253769d834a483781c13da75c45b15f7deb5122a80259,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704755302413495950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4px8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f49cb3-f576-4eff-977f-7aa21a8c5810,},Annotations:map[string]string{io.kubernetes.container.hash: 67b748ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e382a1e2026f80efc0a2371c527172cbf0036d0304f1b0de8c5b8024c6d8b723,PodSa
ndboxId:e8208b3f86310ea694958e5f442217478beedf18832379411401b8adb2d05378,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704755278978006164,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192738f9de0b7320290fd759ae2b29df,},Annotations:map[string]string{io.kubernetes.container.hash: 91f2ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daceaeb5405298e4892ff89a700934cabd970d174790f073435d8cdeceb096f,PodSandboxId:141ba8b0725546ad1903875a800bcf5084510d1
3eaea6c1e1954f4319b2cab03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704755277595921553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8f0f4a92624c25ea95ccaf396f8bb6,},Annotations:map[string]string{io.kubernetes.container.hash: 452d08b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c95a5b830fe11d8c9f398b33c06edea7f9e07593606cb121fdadf731a3565fb,PodSandboxId:5f1209063110200d8aaa31c7c86b6049fdabc7705f556
a6f8a6c6f7decc2058b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704755277278691565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d649cf3f76cec7672e8cf9d921ad20f3b0986db12a976f91fd7006c5ec62a7,PodSandboxId:32b2df694d9c28d
e8cd8fc158e74ed336d74a4d3a4ff5e48ea2db9a05f2eecbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704755277255069441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f61fee86-f9eb-4bfb-a785-0581bc05657b name=/runtime.v1.RuntimeServic
e/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.640170689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=988ea46a-d901-460a-afd8-9616b166ba3b name=/runtime.v1.RuntimeService/Version
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.640254588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=988ea46a-d901-460a-afd8-9616b166ba3b name=/runtime.v1.RuntimeService/Version
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.641769338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3892b8f8-2292-491b-a956-1447dd7f06ff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.642249931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755535642236021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=3892b8f8-2292-491b-a956-1447dd7f06ff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.642981725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2137a096-b61a-4fec-9049-f600897786e4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.643028548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2137a096-b61a-4fec-9049-f600897786e4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:12:15 ingress-addon-legacy-132808 crio[717]: time="2024-01-08 23:12:15.643895449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5766b79193bed5f818df2a814765119a527f76e8c34071709c008a7abca169b0,PodSandboxId:19638132cacbec7b7463a50634e6a59b534149947db5191bc786984e94b640ed,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704755515907149268,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-2gzzn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4384d9d-020a-4c0c-8893-f778e4b2815a,},Annotations:map[string]string{io.kubernetes.container.hash: e0a92b21,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b753d042355cb38061452d8c683397578447b8d59e70e3baf01f437b1d8f2f,PodSandboxId:5c76d1dc9275a6c23d77ab232e01ab868a06ef524885640c774dc7a04eccaeaa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704755373971929517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a359bd4b-8452-4dd0-914f-38e7a1fc5591,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 40daaad,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c70a480bc7a3cf72e1563349fe51fd9a10699f04ff4f8cc19aae139562179673,PodSandboxId:1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704755357770989303,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-59rrn,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de4d1faf-86b2-4232-92de-a596c9d59f89,},Annotations:map[string]string{io.kubernetes.container.hash: ef08225,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d06c1b3c444a9695f91d019326e1354f28e05d1caf84d1e14ee835fcf7f399c3,PodSandboxId:38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea5
8fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347600779423,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rvbr6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aaae065a-0358-4ca2-b96e-a0b84c2bcf8d,},Annotations:map[string]string{io.kubernetes.container.hash: a70eb15a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ba62768c5822d1faec9102a4c3f4ec70f2eedaee7ca63cefee9f99e738701a,PodSandboxId:003dcf4a1bed92072d0636fd343a357bca319fb0677bf4aed17f8dced5964bb6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-c
ertgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704755347105575301,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-q7twx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b59fd534-0e4b-45c3-80dd-57e6edf44860,},Annotations:map[string]string{io.kubernetes.container.hash: bff672ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8440de4e23da446cb7725985789e5036fc7742add70f42bf0a8742591fbe46,PodSandboxId:2c20e0e183247953cc390fef1fc5b843c0ea10d2ce885c0735a1eab9d232dd0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:
&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755304167900681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d496ec5d-4746-41a3-bf86-0f0174ae521c,},Annotations:map[string]string{io.kubernetes.container.hash: 595c420c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fe4d55df168b3b406ef0205950a52a47de6d24bd77cc8d7701d4d528c4d0dd,PodSandboxId:d592403826fd94b21795f4e78cdaf90d14f1a8a11d2ad9fc470d369236711e4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704755303675931448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hq95h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a6bfeaa-c475-407a-ae60-dbbbababae82,},Annotations:map[string]string{io.kubernetes.container.hash: ad5abff1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66bd689a24ad3e1755446c918e5c09a0f9de3104f07bead63d1498e997ff31c3,PodSandboxId:97d398544cad87641ec253769d834a483781c13da75c45b15f7deb5122a80259,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a
754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704755302413495950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-4px8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75f49cb3-f576-4eff-977f-7aa21a8c5810,},Annotations:map[string]string{io.kubernetes.container.hash: 67b748ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e382a1e2026f80efc0a2371c527172cbf0036d0304f1b0de8c5b8024c6d8b723,PodSa
ndboxId:e8208b3f86310ea694958e5f442217478beedf18832379411401b8adb2d05378,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704755278978006164,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 192738f9de0b7320290fd759ae2b29df,},Annotations:map[string]string{io.kubernetes.container.hash: 91f2ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daceaeb5405298e4892ff89a700934cabd970d174790f073435d8cdeceb096f,PodSandboxId:141ba8b0725546ad1903875a800bcf5084510d1
3eaea6c1e1954f4319b2cab03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704755277595921553,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8f0f4a92624c25ea95ccaf396f8bb6,},Annotations:map[string]string{io.kubernetes.container.hash: 452d08b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c95a5b830fe11d8c9f398b33c06edea7f9e07593606cb121fdadf731a3565fb,PodSandboxId:5f1209063110200d8aaa31c7c86b6049fdabc7705f556
a6f8a6c6f7decc2058b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704755277278691565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d649cf3f76cec7672e8cf9d921ad20f3b0986db12a976f91fd7006c5ec62a7,PodSandboxId:32b2df694d9c28d
e8cd8fc158e74ed336d74a4d3a4ff5e48ea2db9a05f2eecbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704755277255069441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-132808,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2137a096-b61a-4fec-9049-f600897786e4 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5766b79193bed       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            19 seconds ago      Running             hello-world-app           0                   19638132cacbe       hello-world-app-5f5d8b66bb-2gzzn
	08b753d042355       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   5c76d1dc9275a       nginx
	c70a480bc7a3c       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   1ce11d9240abd       ingress-nginx-controller-7fcf777cb7-59rrn
	d06c1b3c444a9       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   38a7eed4c9f79       ingress-nginx-admission-patch-rvbr6
	79ba62768c582       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   003dcf4a1bed9       ingress-nginx-admission-create-q7twx
	7e8440de4e23d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   2c20e0e183247       storage-provisioner
	b0fe4d55df168       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   d592403826fd9       kube-proxy-hq95h
	66bd689a24ad3       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   97d398544cad8       coredns-66bff467f8-4px8c
	e382a1e2026f8       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   e8208b3f86310       etcd-ingress-addon-legacy-132808
	9daceaeb54052       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   141ba8b072554       kube-apiserver-ingress-addon-legacy-132808
	7c95a5b830fe1       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   5f12090631102       kube-controller-manager-ingress-addon-legacy-132808
	72d649cf3f76c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   32b2df694d9c2       kube-scheduler-ingress-addon-legacy-132808
	
	
	==> coredns [66bd689a24ad3e1755446c918e5c09a0f9de3104f07bead63d1498e997ff31c3] <==
	[INFO] 10.244.0.6:47008 - 904 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073807s
	[INFO] 10.244.0.6:47008 - 11335 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068577s
	[INFO] 10.244.0.6:47008 - 45120 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070373s
	[INFO] 10.244.0.6:47008 - 29825 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110805s
	[INFO] 10.244.0.6:59707 - 20278 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000094121s
	[INFO] 10.244.0.6:59707 - 3840 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061877s
	[INFO] 10.244.0.6:59707 - 29015 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069276s
	[INFO] 10.244.0.6:59707 - 11397 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089162s
	[INFO] 10.244.0.6:59707 - 16825 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006502s
	[INFO] 10.244.0.6:59707 - 57398 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125358s
	[INFO] 10.244.0.6:59707 - 48069 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000118626s
	[INFO] 10.244.0.6:33594 - 9028 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085644s
	[INFO] 10.244.0.6:33594 - 51996 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064585s
	[INFO] 10.244.0.6:33594 - 26515 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000116619s
	[INFO] 10.244.0.6:33594 - 25812 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004128s
	[INFO] 10.244.0.6:33594 - 53872 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00009728s
	[INFO] 10.244.0.6:56459 - 62447 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067617s
	[INFO] 10.244.0.6:33594 - 40911 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053871s
	[INFO] 10.244.0.6:33594 - 28845 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060999s
	[INFO] 10.244.0.6:56459 - 49233 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.002272923s
	[INFO] 10.244.0.6:56459 - 3771 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000490713s
	[INFO] 10.244.0.6:56459 - 29797 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065467s
	[INFO] 10.244.0.6:56459 - 37175 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065997s
	[INFO] 10.244.0.6:56459 - 44775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064176s
	[INFO] 10.244.0.6:56459 - 64316 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000089294s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-132808
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-132808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=ingress-addon-legacy-132808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_08_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:08:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-132808
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:12:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:12:07 +0000   Mon, 08 Jan 2024 23:07:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:12:07 +0000   Mon, 08 Jan 2024 23:07:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:12:07 +0000   Mon, 08 Jan 2024 23:07:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:12:07 +0000   Mon, 08 Jan 2024 23:08:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    ingress-addon-legacy-132808
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ab4509646c640009b431e189b5455cc
	  System UUID:                7ab45096-46c6-4000-9b43-1e189b5455cc
	  Boot ID:                    0d8571b2-0ea9-430c-8a90-79fcea6afb98
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-2gzzn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-4px8c                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m54s
	  kube-system                 etcd-ingress-addon-legacy-132808                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-apiserver-ingress-addon-legacy-132808             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-132808    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-hq95h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ingress-addon-legacy-132808             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s (x5 over 4m20s)  kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x6 over 4m20s)  kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x5 over 4m20s)  kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node ingress-addon-legacy-132808 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m58s                  kubelet     Node ingress-addon-legacy-132808 status is now: NodeReady
	  Normal  Starting                 3m52s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 23:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.097865] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.608572] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.780115] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149905] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.091786] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.852481] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.116231] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.159095] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.122972] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.242481] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +8.334336] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +2.954541] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 8 23:08] systemd-fstab-generator[1429]: Ignoring "noauto" for root device
	[ +15.636878] kauditd_printk_skb: 6 callbacks suppressed
	[ +37.651856] kauditd_printk_skb: 20 callbacks suppressed
	[Jan 8 23:09] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.296667] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.248786] kauditd_printk_skb: 3 callbacks suppressed
	[Jan 8 23:12] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [e382a1e2026f80efc0a2371c527172cbf0036d0304f1b0de8c5b8024c6d8b723] <==
	2024-01-08 23:07:59.104978 W | auth: simple token is not cryptographically signed
	2024-01-08 23:07:59.110305 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2024/01/08 23:07:59 INFO: d85ef093c7464643 switched to configuration voters=(15591163477497366083)
	2024-01-08 23:07:59.111668 I | etcdserver/membership: added member d85ef093c7464643 [https://192.168.39.117:2380] to cluster 44831ab0f42e7be7
	2024-01-08 23:07:59.111804 I | etcdserver: d85ef093c7464643 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-08 23:07:59.113236 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 23:07:59.113444 I | embed: listening for peers on 192.168.39.117:2380
	2024-01-08 23:07:59.113619 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/08 23:07:59 INFO: d85ef093c7464643 is starting a new election at term 1
	raft2024/01/08 23:07:59 INFO: d85ef093c7464643 became candidate at term 2
	raft2024/01/08 23:07:59 INFO: d85ef093c7464643 received MsgVoteResp from d85ef093c7464643 at term 2
	raft2024/01/08 23:07:59 INFO: d85ef093c7464643 became leader at term 2
	raft2024/01/08 23:07:59 INFO: raft.node: d85ef093c7464643 elected leader d85ef093c7464643 at term 2
	2024-01-08 23:07:59.396664 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 23:07:59.398493 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 23:07:59.398837 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 23:07:59.398892 I | etcdserver: published {Name:ingress-addon-legacy-132808 ClientURLs:[https://192.168.39.117:2379]} to cluster 44831ab0f42e7be7
	2024-01-08 23:07:59.398910 I | embed: ready to serve client requests
	2024-01-08 23:07:59.399187 I | embed: ready to serve client requests
	2024-01-08 23:07:59.400955 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 23:07:59.404612 I | embed: serving client requests on 192.168.39.117:2379
	2024-01-08 23:08:21.257528 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (330.274738ms) to execute
	2024-01-08 23:08:21.257668 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replication-controller\" " with result "range_response_count:1 size:212" took too long (490.189989ms) to execute
	2024-01-08 23:09:15.053067 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (434.840941ms) to execute
	2024-01-08 23:09:15.054328 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (401.354059ms) to execute
	
	
	==> kernel <==
	 23:12:15 up 4 min,  0 users,  load average: 0.69, 0.43, 0.19
	Linux ingress-addon-legacy-132808 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9daceaeb5405298e4892ff89a700934cabd970d174790f073435d8cdeceb096f] <==
	E0108 23:08:02.945192       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.117, ResourceVersion: 0, AdditionalErrorMsg: 
	I0108 23:08:03.063349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:08:03.064282       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:08:03.064989       1 cache.go:39] Caches are synced for autoregister controller
	I0108 23:08:03.069925       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 23:08:03.069984       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 23:08:03.862041       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 23:08:03.862165       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 23:08:03.869341       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 23:08:03.876795       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 23:08:03.876845       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 23:08:04.508944       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:08:04.558847       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 23:08:04.667449       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.117]
	I0108 23:08:04.668892       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 23:08:04.680346       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 23:08:05.238691       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 23:08:06.461233       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 23:08:06.575844       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 23:08:06.921089       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 23:08:21.377361       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 23:08:21.410060       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 23:09:02.318198       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 23:09:29.912619       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0108 23:12:08.087462       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [7c95a5b830fe11d8c9f398b33c06edea7f9e07593606cb121fdadf731a3565fb] <==
	I0108 23:08:21.517303       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"23c836ee-2b60-4fe4-b497-d16c8768ec4f", APIVersion:"apps/v1", ResourceVersion:"227", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hq95h
	I0108 23:08:21.614286       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0108 23:08:21.625153       1 shared_informer.go:230] Caches are synced for namespace 
	I0108 23:08:21.715290       1 shared_informer.go:230] Caches are synced for service account 
	I0108 23:08:21.728354       1 shared_informer.go:230] Caches are synced for stateful set 
	I0108 23:08:21.815058       1 shared_informer.go:230] Caches are synced for HPA 
	I0108 23:08:21.913866       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 23:08:21.914102       1 shared_informer.go:230] Caches are synced for disruption 
	I0108 23:08:21.914132       1 disruption.go:339] Sending events to api server.
	I0108 23:08:21.914990       1 shared_informer.go:230] Caches are synced for attach detach 
	I0108 23:08:21.968786       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0108 23:08:21.968923       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 23:08:21.969444       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 23:08:21.969490       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 23:08:21.970498       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 23:08:22.325912       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e18d97b5-59e5-4e2b-a69e-00ac52aa2041", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0108 23:08:22.423833       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"81d297b1-f04e-44f2-91c5-8038db46cbe9", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-4xk8h
	I0108 23:09:02.290091       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"79f0b82c-cacc-456e-ab71-753638ccefbe", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 23:09:02.308986       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9378e73e-6511-4130-b43b-967c9cbd88b7", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-59rrn
	I0108 23:09:02.361740       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ef2932f0-76e9-4aff-a334-6c3fb83f36a2", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-q7twx
	I0108 23:09:02.450678       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"38be4ee8-8133-4514-91c5-6ea489b8c09f", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rvbr6
	I0108 23:09:07.347645       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"ef2932f0-76e9-4aff-a334-6c3fb83f36a2", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 23:09:08.353284       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"38be4ee8-8133-4514-91c5-6ea489b8c09f", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 23:11:52.836292       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"fa3f9ee0-4a3f-440f-84af-8bcd59731b39", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 23:11:52.869371       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"76af1ec0-23bc-406f-9865-6df87b84972b", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-2gzzn
	
	
	==> kube-proxy [b0fe4d55df168b3b406ef0205950a52a47de6d24bd77cc8d7701d4d528c4d0dd] <==
	W0108 23:08:23.915746       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 23:08:23.925065       1 node.go:136] Successfully retrieved node IP: 192.168.39.117
	I0108 23:08:23.925121       1 server_others.go:186] Using iptables Proxier.
	I0108 23:08:23.925334       1 server.go:583] Version: v1.18.20
	I0108 23:08:23.930156       1 config.go:315] Starting service config controller
	I0108 23:08:23.930196       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 23:08:23.930218       1 config.go:133] Starting endpoints config controller
	I0108 23:08:23.930241       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 23:08:24.030530       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0108 23:08:24.030606       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [72d649cf3f76cec7672e8cf9d921ad20f3b0986db12a976f91fd7006c5ec62a7] <==
	I0108 23:08:02.993071       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:08:02.993123       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:08:02.994050       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 23:08:02.995532       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 23:08:03.007172       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 23:08:03.014094       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 23:08:03.014222       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:08:03.014973       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:08:03.015094       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:08:03.017998       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 23:08:03.018242       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:08:03.019947       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 23:08:03.020045       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 23:08:03.015509       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 23:08:03.020279       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 23:08:03.021554       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 23:08:03.863603       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 23:08:03.899655       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:08:04.038189       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:08:04.041069       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:08:04.049205       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 23:08:04.186792       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:08:04.212965       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 23:08:04.255161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0108 23:08:06.694144       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 23:07:30 UTC, ends at Mon 2024-01-08 23:12:16 UTC. --
	Jan 08 23:09:08 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:08.342925    1436 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d06c1b3c444a9695f91d019326e1354f28e05d1caf84d1e14ee835fcf7f399c3
	Jan 08 23:09:09 ingress-addon-legacy-132808 kubelet[1436]: W0108 23:09:09.347952    1436 pod_container_deletor.go:77] Container "38a7eed4c9f798e607a9a7714250672e25e69b2951d7452bb40a069feec52e65" not found in pod's containers
	Jan 08 23:09:09 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:09.519832    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-admission-token-x5ns9" (UniqueName: "kubernetes.io/secret/aaae065a-0358-4ca2-b96e-a0b84c2bcf8d-ingress-nginx-admission-token-x5ns9") pod "aaae065a-0358-4ca2-b96e-a0b84c2bcf8d" (UID: "aaae065a-0358-4ca2-b96e-a0b84c2bcf8d")
	Jan 08 23:09:09 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:09.525272    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaae065a-0358-4ca2-b96e-a0b84c2bcf8d-ingress-nginx-admission-token-x5ns9" (OuterVolumeSpecName: "ingress-nginx-admission-token-x5ns9") pod "aaae065a-0358-4ca2-b96e-a0b84c2bcf8d" (UID: "aaae065a-0358-4ca2-b96e-a0b84c2bcf8d"). InnerVolumeSpecName "ingress-nginx-admission-token-x5ns9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:09:09 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:09.620315    1436 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-x5ns9" (UniqueName: "kubernetes.io/secret/aaae065a-0358-4ca2-b96e-a0b84c2bcf8d-ingress-nginx-admission-token-x5ns9") on node "ingress-addon-legacy-132808" DevicePath ""
	Jan 08 23:09:19 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:19.612216    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 23:09:19 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:19.754755    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-cvngr" (UniqueName: "kubernetes.io/secret/3861395a-6417-45ba-b1ca-08606fb4c48c-minikube-ingress-dns-token-cvngr") pod "kube-ingress-dns-minikube" (UID: "3861395a-6417-45ba-b1ca-08606fb4c48c")
	Jan 08 23:09:30 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:30.097665    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 23:09:30 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:09:30.288993    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jjm29" (UniqueName: "kubernetes.io/secret/a359bd4b-8452-4dd0-914f-38e7a1fc5591-default-token-jjm29") pod "nginx" (UID: "a359bd4b-8452-4dd0-914f-38e7a1fc5591")
	Jan 08 23:11:52 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:52.881281    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 23:11:52 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:52.953018    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jjm29" (UniqueName: "kubernetes.io/secret/e4384d9d-020a-4c0c-8893-f778e4b2815a-default-token-jjm29") pod "hello-world-app-5f5d8b66bb-2gzzn" (UID: "e4384d9d-020a-4c0c-8893-f778e4b2815a")
	Jan 08 23:11:54 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:54.421768    1436 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5974ea1455dab3aecf939ba8290dfb5158d2ed97dd21a49f4e75f9ddc6fdefb5
	Jan 08 23:11:55 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:55.562564    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-cvngr" (UniqueName: "kubernetes.io/secret/3861395a-6417-45ba-b1ca-08606fb4c48c-minikube-ingress-dns-token-cvngr") pod "3861395a-6417-45ba-b1ca-08606fb4c48c" (UID: "3861395a-6417-45ba-b1ca-08606fb4c48c")
	Jan 08 23:11:55 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:55.567042    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3861395a-6417-45ba-b1ca-08606fb4c48c-minikube-ingress-dns-token-cvngr" (OuterVolumeSpecName: "minikube-ingress-dns-token-cvngr") pod "3861395a-6417-45ba-b1ca-08606fb4c48c" (UID: "3861395a-6417-45ba-b1ca-08606fb4c48c"). InnerVolumeSpecName "minikube-ingress-dns-token-cvngr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:11:55 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:11:55.663115    1436 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-cvngr" (UniqueName: "kubernetes.io/secret/3861395a-6417-45ba-b1ca-08606fb4c48c-minikube-ingress-dns-token-cvngr") on node "ingress-addon-legacy-132808" DevicePath ""
	Jan 08 23:12:08 ingress-addon-legacy-132808 kubelet[1436]: E0108 23:12:08.068723    1436 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-59rrn.17a8821edf150a73", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-59rrn", UID:"de4d1faf-86b2-4232-92de-a596c9d59f89", APIVersion:"v1", ResourceVersion:"490", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-132808"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3e3203c23a73, ext:241648951121, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3e3203c23a73, ext:241648951121, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-59rrn.17a8821edf150a73" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 23:12:08 ingress-addon-legacy-132808 kubelet[1436]: E0108 23:12:08.083093    1436 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-59rrn.17a8821edf150a73", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-59rrn", UID:"de4d1faf-86b2-4232-92de-a596c9d59f89", APIVersion:"v1", ResourceVersion:"490", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-132808"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3e3203c23a73, ext:241648951121, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3e3204971455, ext:241662900525, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-59rrn.17a8821edf150a73" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 23:12:10 ingress-addon-legacy-132808 kubelet[1436]: W0108 23:12:10.551029    1436 pod_container_deletor.go:77] Container "1ce11d9240abd253415ed4ad6a678ea2f3a49410fbf55601f847b351b36d9643" not found in pod's containers
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.216759    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-7w2fq" (UniqueName: "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-ingress-nginx-token-7w2fq") pod "de4d1faf-86b2-4232-92de-a596c9d59f89" (UID: "de4d1faf-86b2-4232-92de-a596c9d59f89")
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.216808    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-webhook-cert") pod "de4d1faf-86b2-4232-92de-a596c9d59f89" (UID: "de4d1faf-86b2-4232-92de-a596c9d59f89")
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.222450    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-ingress-nginx-token-7w2fq" (OuterVolumeSpecName: "ingress-nginx-token-7w2fq") pod "de4d1faf-86b2-4232-92de-a596c9d59f89" (UID: "de4d1faf-86b2-4232-92de-a596c9d59f89"). InnerVolumeSpecName "ingress-nginx-token-7w2fq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.222745    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "de4d1faf-86b2-4232-92de-a596c9d59f89" (UID: "de4d1faf-86b2-4232-92de-a596c9d59f89"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.317193    1436 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-webhook-cert") on node "ingress-addon-legacy-132808" DevicePath ""
	Jan 08 23:12:12 ingress-addon-legacy-132808 kubelet[1436]: I0108 23:12:12.317247    1436 reconciler.go:319] Volume detached for volume "ingress-nginx-token-7w2fq" (UniqueName: "kubernetes.io/secret/de4d1faf-86b2-4232-92de-a596c9d59f89-ingress-nginx-token-7w2fq") on node "ingress-addon-legacy-132808" DevicePath ""
	Jan 08 23:12:13 ingress-addon-legacy-132808 kubelet[1436]: W0108 23:12:13.102931    1436 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/de4d1faf-86b2-4232-92de-a596c9d59f89/volumes" does not exist
	
	
	==> storage-provisioner [7e8440de4e23da446cb7725985789e5036fc7742add70f42bf0a8742591fbe46] <==
	I0108 23:08:24.312760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 23:08:24.324115       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 23:08:24.324209       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 23:08:24.333531       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 23:08:24.333722       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-132808_67fb4331-b4ba-462b-8897-a77e6f2d1d41!
	I0108 23:08:24.334701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1c50366-8969-4260-a776-1b70abb2aaad", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-132808_67fb4331-b4ba-462b-8897-a77e6f2d1d41 became leader
	I0108 23:08:24.434322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-132808_67fb4331-b4ba-462b-8897-a77e6f2d1d41!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-132808 -n ingress-addon-legacy-132808
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-132808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.02s)
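Note on the coredns entries in the post-mortem above: the run of NXDOMAIN answers is the normal Kubernetes search-path expansion rather than part of the failure. With ndots:5, a name such as hello-world-app.default.svc.cluster.local (four dots) is first tried with each search suffix appended, which produces the NXDOMAIN responses, before the absolute name returns NOERROR. A minimal sketch of how this could be confirmed, assuming a pod in the ingress-nginx namespace; the resolver values and pod name are illustrative and were not captured by this test:

	# Illustrative only; values assumed, not recorded in this log. The querying pod's
	# /etc/resolv.conf would be roughly:
	#   search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	#   nameserver <cluster-dns-ip>
	#   options ndots:5
	# and can be inspected with (pod name is a placeholder):
	out/minikube-linux-amd64 kubectl -p ingress-addon-legacy-132808 -- exec -n ingress-nginx <controller-pod> -- cat /etc/resolv.conf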

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- sh -c "ping -c 1 192.168.39.1": exit status 1 (196.12839ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-nl6pn): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- sh -c "ping -c 1 192.168.39.1": exit status 1 (192.832966ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-wz22p): exit status 1
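Note: busybox's ping opens a raw ICMP socket, so "ping: permission denied (are you root?)" is what it prints when the container is not root and/or lacks the CAP_NET_RAW capability (CRI-O does not grant NET_RAW by default), which is one plausible reading of both failures above. The commands below are an illustrative triage sketch only, not part of the test run: the profile and pod names are copied from the log above, and the sysctl workaround is an assumption about the node image, not something the suite applies.
	# Check the effective capability mask inside the failing pod; CAP_NET_RAW is bit 13 (0x2000)
	out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- sh -c "grep CapEff /proc/1/status"
	# Hypothetical workaround: allow unprivileged ICMP echo for every GID on the node
	# (alternatively, the pod spec could add NET_RAW under securityContext.capabilities.add)
	out/minikube-linux-amd64 ssh -p multinode-266395 "sudo sysctl -w net.ipv4.ping_group_range='0 2147483647'"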
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-266395 -n multinode-266395
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-266395 logs -n 25: (1.308235967s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-169147 ssh -- ls                    | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:16 UTC | 08 Jan 24 23:16 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169147 ssh --                       | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:16 UTC | 08 Jan 24 23:16 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-169147                           | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:16 UTC | 08 Jan 24 23:16 UTC |
	| start   | -p mount-start-2-169147                           | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:16 UTC | 08 Jan 24 23:17 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC |                     |
	|         | --profile mount-start-2-169147                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169147 ssh -- ls                    | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC | 08 Jan 24 23:17 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-169147 ssh --                       | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC | 08 Jan 24 23:17 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-169147                           | mount-start-2-169147 | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC | 08 Jan 24 23:17 UTC |
	| delete  | -p mount-start-1-152567                           | mount-start-1-152567 | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC | 08 Jan 24 23:17 UTC |
	| start   | -p multinode-266395                               | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:17 UTC | 08 Jan 24 23:19 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- apply -f                   | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- rollout                    | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- get pods -o                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- get pods -o                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-nl6pn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-wz22p --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-nl6pn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-wz22p --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-nl6pn -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-wz22p -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- get pods -o                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-nl6pn                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC |                     |
	|         | busybox-5bc68d56bd-nl6pn -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC | 08 Jan 24 23:19 UTC |
	|         | busybox-5bc68d56bd-wz22p                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-266395 -- exec                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:19 UTC |                     |
	|         | busybox-5bc68d56bd-wz22p -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:17:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:17:13.695888  420066 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:17:13.696135  420066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:17:13.696144  420066 out.go:309] Setting ErrFile to fd 2...
	I0108 23:17:13.696149  420066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:17:13.696347  420066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:17:13.696952  420066 out.go:303] Setting JSON to false
	I0108 23:17:13.697944  420066 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14360,"bootTime":1704741474,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:17:13.698009  420066 start.go:138] virtualization: kvm guest
	I0108 23:17:13.700381  420066 out.go:177] * [multinode-266395] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:17:13.701874  420066 notify.go:220] Checking for updates...
	I0108 23:17:13.701900  420066 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:17:13.703549  420066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:17:13.704962  420066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:17:13.706385  420066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:17:13.707581  420066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:17:13.708707  420066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:17:13.710152  420066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:17:13.746125  420066 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 23:17:13.747419  420066 start.go:298] selected driver: kvm2
	I0108 23:17:13.747431  420066 start.go:902] validating driver "kvm2" against <nil>
	I0108 23:17:13.747443  420066 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:17:13.748111  420066 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:17:13.748190  420066 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:17:13.763469  420066 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:17:13.763522  420066 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 23:17:13.763723  420066 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:17:13.763781  420066 cni.go:84] Creating CNI manager for ""
	I0108 23:17:13.763793  420066 cni.go:136] 0 nodes found, recommending kindnet
	I0108 23:17:13.763800  420066 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 23:17:13.763811  420066 start_flags.go:323] config:
	{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:17:13.763936  420066 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:17:13.766596  420066 out.go:177] * Starting control plane node multinode-266395 in cluster multinode-266395
	I0108 23:17:13.768060  420066 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:17:13.768104  420066 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 23:17:13.768115  420066 cache.go:56] Caching tarball of preloaded images
	I0108 23:17:13.768207  420066 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:17:13.768219  420066 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:17:13.768540  420066 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:17:13.768565  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json: {Name:mkd2c757b8579d6c97f000365183918f4dad3eac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:13.768695  420066 start.go:365] acquiring machines lock for multinode-266395: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:17:13.768722  420066 start.go:369] acquired machines lock for "multinode-266395" in 15.087µs
	I0108 23:17:13.768741  420066 start.go:93] Provisioning new machine with config: &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:17:13.768794  420066 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 23:17:13.770486  420066 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 23:17:13.770611  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:17:13.770645  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:17:13.784669  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0108 23:17:13.785043  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:17:13.785591  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:17:13.785613  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:17:13.785923  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:17:13.786099  420066 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:17:13.786262  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:13.786398  420066 start.go:159] libmachine.API.Create for "multinode-266395" (driver="kvm2")
	I0108 23:17:13.786431  420066 client.go:168] LocalClient.Create starting
	I0108 23:17:13.786464  420066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0108 23:17:13.786503  420066 main.go:141] libmachine: Decoding PEM data...
	I0108 23:17:13.786528  420066 main.go:141] libmachine: Parsing certificate...
	I0108 23:17:13.786589  420066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0108 23:17:13.786617  420066 main.go:141] libmachine: Decoding PEM data...
	I0108 23:17:13.786631  420066 main.go:141] libmachine: Parsing certificate...
	I0108 23:17:13.786654  420066 main.go:141] libmachine: Running pre-create checks...
	I0108 23:17:13.786666  420066 main.go:141] libmachine: (multinode-266395) Calling .PreCreateCheck
	I0108 23:17:13.787034  420066 main.go:141] libmachine: (multinode-266395) Calling .GetConfigRaw
	I0108 23:17:13.787452  420066 main.go:141] libmachine: Creating machine...
	I0108 23:17:13.787466  420066 main.go:141] libmachine: (multinode-266395) Calling .Create
	I0108 23:17:13.787581  420066 main.go:141] libmachine: (multinode-266395) Creating KVM machine...
	I0108 23:17:13.788883  420066 main.go:141] libmachine: (multinode-266395) DBG | found existing default KVM network
	I0108 23:17:13.789577  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:13.789436  420089 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147910}
	I0108 23:17:13.794464  420066 main.go:141] libmachine: (multinode-266395) DBG | trying to create private KVM network mk-multinode-266395 192.168.39.0/24...
	I0108 23:17:13.866421  420066 main.go:141] libmachine: (multinode-266395) DBG | private KVM network mk-multinode-266395 192.168.39.0/24 created
	I0108 23:17:13.866460  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:13.866397  420089 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:17:13.866475  420066 main.go:141] libmachine: (multinode-266395) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395 ...
	I0108 23:17:13.866494  420066 main.go:141] libmachine: (multinode-266395) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 23:17:13.866601  420066 main.go:141] libmachine: (multinode-266395) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 23:17:14.120555  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:14.120429  420089 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa...
	I0108 23:17:14.310484  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:14.310309  420089 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/multinode-266395.rawdisk...
	I0108 23:17:14.310531  420066 main.go:141] libmachine: (multinode-266395) DBG | Writing magic tar header
	I0108 23:17:14.310558  420066 main.go:141] libmachine: (multinode-266395) DBG | Writing SSH key tar header
	I0108 23:17:14.310573  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:14.310441  420089 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395 ...
	I0108 23:17:14.310586  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395
	I0108 23:17:14.310595  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395 (perms=drwx------)
	I0108 23:17:14.310602  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0108 23:17:14.310611  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0108 23:17:14.310621  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0108 23:17:14.310631  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0108 23:17:14.310648  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 23:17:14.310666  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:17:14.310675  420066 main.go:141] libmachine: (multinode-266395) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 23:17:14.310684  420066 main.go:141] libmachine: (multinode-266395) Creating domain...
	I0108 23:17:14.310691  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0108 23:17:14.310699  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 23:17:14.310708  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home/jenkins
	I0108 23:17:14.310718  420066 main.go:141] libmachine: (multinode-266395) DBG | Checking permissions on dir: /home
	I0108 23:17:14.310730  420066 main.go:141] libmachine: (multinode-266395) DBG | Skipping /home - not owner
	I0108 23:17:14.311993  420066 main.go:141] libmachine: (multinode-266395) define libvirt domain using xml: 
	I0108 23:17:14.312022  420066 main.go:141] libmachine: (multinode-266395) <domain type='kvm'>
	I0108 23:17:14.312037  420066 main.go:141] libmachine: (multinode-266395)   <name>multinode-266395</name>
	I0108 23:17:14.312047  420066 main.go:141] libmachine: (multinode-266395)   <memory unit='MiB'>2200</memory>
	I0108 23:17:14.312058  420066 main.go:141] libmachine: (multinode-266395)   <vcpu>2</vcpu>
	I0108 23:17:14.312065  420066 main.go:141] libmachine: (multinode-266395)   <features>
	I0108 23:17:14.312075  420066 main.go:141] libmachine: (multinode-266395)     <acpi/>
	I0108 23:17:14.312080  420066 main.go:141] libmachine: (multinode-266395)     <apic/>
	I0108 23:17:14.312085  420066 main.go:141] libmachine: (multinode-266395)     <pae/>
	I0108 23:17:14.312091  420066 main.go:141] libmachine: (multinode-266395)     
	I0108 23:17:14.312097  420066 main.go:141] libmachine: (multinode-266395)   </features>
	I0108 23:17:14.312105  420066 main.go:141] libmachine: (multinode-266395)   <cpu mode='host-passthrough'>
	I0108 23:17:14.312118  420066 main.go:141] libmachine: (multinode-266395)   
	I0108 23:17:14.312149  420066 main.go:141] libmachine: (multinode-266395)   </cpu>
	I0108 23:17:14.312161  420066 main.go:141] libmachine: (multinode-266395)   <os>
	I0108 23:17:14.312171  420066 main.go:141] libmachine: (multinode-266395)     <type>hvm</type>
	I0108 23:17:14.312177  420066 main.go:141] libmachine: (multinode-266395)     <boot dev='cdrom'/>
	I0108 23:17:14.312182  420066 main.go:141] libmachine: (multinode-266395)     <boot dev='hd'/>
	I0108 23:17:14.312192  420066 main.go:141] libmachine: (multinode-266395)     <bootmenu enable='no'/>
	I0108 23:17:14.312200  420066 main.go:141] libmachine: (multinode-266395)   </os>
	I0108 23:17:14.312214  420066 main.go:141] libmachine: (multinode-266395)   <devices>
	I0108 23:17:14.312230  420066 main.go:141] libmachine: (multinode-266395)     <disk type='file' device='cdrom'>
	I0108 23:17:14.312250  420066 main.go:141] libmachine: (multinode-266395)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/boot2docker.iso'/>
	I0108 23:17:14.312261  420066 main.go:141] libmachine: (multinode-266395)       <target dev='hdc' bus='scsi'/>
	I0108 23:17:14.312271  420066 main.go:141] libmachine: (multinode-266395)       <readonly/>
	I0108 23:17:14.312276  420066 main.go:141] libmachine: (multinode-266395)     </disk>
	I0108 23:17:14.312283  420066 main.go:141] libmachine: (multinode-266395)     <disk type='file' device='disk'>
	I0108 23:17:14.312296  420066 main.go:141] libmachine: (multinode-266395)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 23:17:14.312331  420066 main.go:141] libmachine: (multinode-266395)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/multinode-266395.rawdisk'/>
	I0108 23:17:14.312350  420066 main.go:141] libmachine: (multinode-266395)       <target dev='hda' bus='virtio'/>
	I0108 23:17:14.312364  420066 main.go:141] libmachine: (multinode-266395)     </disk>
	I0108 23:17:14.312375  420066 main.go:141] libmachine: (multinode-266395)     <interface type='network'>
	I0108 23:17:14.312382  420066 main.go:141] libmachine: (multinode-266395)       <source network='mk-multinode-266395'/>
	I0108 23:17:14.312390  420066 main.go:141] libmachine: (multinode-266395)       <model type='virtio'/>
	I0108 23:17:14.312397  420066 main.go:141] libmachine: (multinode-266395)     </interface>
	I0108 23:17:14.312409  420066 main.go:141] libmachine: (multinode-266395)     <interface type='network'>
	I0108 23:17:14.312419  420066 main.go:141] libmachine: (multinode-266395)       <source network='default'/>
	I0108 23:17:14.312427  420066 main.go:141] libmachine: (multinode-266395)       <model type='virtio'/>
	I0108 23:17:14.312434  420066 main.go:141] libmachine: (multinode-266395)     </interface>
	I0108 23:17:14.312441  420066 main.go:141] libmachine: (multinode-266395)     <serial type='pty'>
	I0108 23:17:14.312447  420066 main.go:141] libmachine: (multinode-266395)       <target port='0'/>
	I0108 23:17:14.312454  420066 main.go:141] libmachine: (multinode-266395)     </serial>
	I0108 23:17:14.312493  420066 main.go:141] libmachine: (multinode-266395)     <console type='pty'>
	I0108 23:17:14.312526  420066 main.go:141] libmachine: (multinode-266395)       <target type='serial' port='0'/>
	I0108 23:17:14.312545  420066 main.go:141] libmachine: (multinode-266395)     </console>
	I0108 23:17:14.312563  420066 main.go:141] libmachine: (multinode-266395)     <rng model='virtio'>
	I0108 23:17:14.312579  420066 main.go:141] libmachine: (multinode-266395)       <backend model='random'>/dev/random</backend>
	I0108 23:17:14.312595  420066 main.go:141] libmachine: (multinode-266395)     </rng>
	I0108 23:17:14.312608  420066 main.go:141] libmachine: (multinode-266395)     
	I0108 23:17:14.312619  420066 main.go:141] libmachine: (multinode-266395)     
	I0108 23:17:14.312640  420066 main.go:141] libmachine: (multinode-266395)   </devices>
	I0108 23:17:14.312657  420066 main.go:141] libmachine: (multinode-266395) </domain>
	I0108 23:17:14.312669  420066 main.go:141] libmachine: (multinode-266395) 
	I0108 23:17:14.317021  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:35:f4:96 in network default
	I0108 23:17:14.317676  420066 main.go:141] libmachine: (multinode-266395) Ensuring networks are active...
	I0108 23:17:14.317701  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:14.318458  420066 main.go:141] libmachine: (multinode-266395) Ensuring network default is active
	I0108 23:17:14.318775  420066 main.go:141] libmachine: (multinode-266395) Ensuring network mk-multinode-266395 is active
	I0108 23:17:14.319257  420066 main.go:141] libmachine: (multinode-266395) Getting domain xml...
	I0108 23:17:14.319965  420066 main.go:141] libmachine: (multinode-266395) Creating domain...
	I0108 23:17:15.545266  420066 main.go:141] libmachine: (multinode-266395) Waiting to get IP...
	I0108 23:17:15.546070  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:15.546427  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:15.546458  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:15.546403  420089 retry.go:31] will retry after 304.924832ms: waiting for machine to come up
	I0108 23:17:15.853161  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:15.853609  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:15.853640  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:15.853552  420089 retry.go:31] will retry after 346.983202ms: waiting for machine to come up
	I0108 23:17:16.202117  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:16.202559  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:16.202605  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:16.202489  420089 retry.go:31] will retry after 433.92928ms: waiting for machine to come up
	I0108 23:17:16.638095  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:16.638594  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:16.638625  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:16.638545  420089 retry.go:31] will retry after 388.110309ms: waiting for machine to come up
	I0108 23:17:17.028377  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:17.028970  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:17.028991  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:17.028911  420089 retry.go:31] will retry after 696.561349ms: waiting for machine to come up
	I0108 23:17:17.726795  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:17.727257  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:17.727287  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:17.727202  420089 retry.go:31] will retry after 805.577586ms: waiting for machine to come up
	I0108 23:17:18.533972  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:18.534350  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:18.534388  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:18.534322  420089 retry.go:31] will retry after 1.097276876s: waiting for machine to come up
	I0108 23:17:19.633085  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:19.633512  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:19.633559  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:19.633443  420089 retry.go:31] will retry after 1.330320038s: waiting for machine to come up
	I0108 23:17:20.965827  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:20.966291  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:20.966325  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:20.966243  420089 retry.go:31] will retry after 1.778362844s: waiting for machine to come up
	I0108 23:17:22.747441  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:22.747858  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:22.747883  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:22.747816  420089 retry.go:31] will retry after 1.856346939s: waiting for machine to come up
	I0108 23:17:24.605384  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:24.605827  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:24.605858  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:24.605788  420089 retry.go:31] will retry after 2.651998206s: waiting for machine to come up
	I0108 23:17:27.260821  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:27.261303  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:27.261337  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:27.261240  420089 retry.go:31] will retry after 3.487419457s: waiting for machine to come up
	I0108 23:17:30.751328  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:30.751749  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:30.751782  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:30.751700  420089 retry.go:31] will retry after 2.76159531s: waiting for machine to come up
	I0108 23:17:33.516745  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:33.517200  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:17:33.517231  420066 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:17:33.517150  420089 retry.go:31] will retry after 3.524172582s: waiting for machine to come up
	I0108 23:17:37.045568  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.046007  420066 main.go:141] libmachine: (multinode-266395) Found IP for machine: 192.168.39.18
	I0108 23:17:37.046043  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has current primary IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.046061  420066 main.go:141] libmachine: (multinode-266395) Reserving static IP address...
	I0108 23:17:37.046344  420066 main.go:141] libmachine: (multinode-266395) DBG | unable to find host DHCP lease matching {name: "multinode-266395", mac: "52:54:00:64:1d:b6", ip: "192.168.39.18"} in network mk-multinode-266395
	I0108 23:17:37.120632  420066 main.go:141] libmachine: (multinode-266395) DBG | Getting to WaitForSSH function...
	I0108 23:17:37.120663  420066 main.go:141] libmachine: (multinode-266395) Reserved static IP address: 192.168.39.18
	I0108 23:17:37.120676  420066 main.go:141] libmachine: (multinode-266395) Waiting for SSH to be available...
	I0108 23:17:37.123747  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.124120  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.124152  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.124336  420066 main.go:141] libmachine: (multinode-266395) DBG | Using SSH client type: external
	I0108 23:17:37.124389  420066 main.go:141] libmachine: (multinode-266395) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa (-rw-------)
	I0108 23:17:37.124437  420066 main.go:141] libmachine: (multinode-266395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:17:37.124461  420066 main.go:141] libmachine: (multinode-266395) DBG | About to run SSH command:
	I0108 23:17:37.124474  420066 main.go:141] libmachine: (multinode-266395) DBG | exit 0
	I0108 23:17:37.215527  420066 main.go:141] libmachine: (multinode-266395) DBG | SSH cmd err, output: <nil>: 
	I0108 23:17:37.215842  420066 main.go:141] libmachine: (multinode-266395) KVM machine creation complete!
	I0108 23:17:37.216145  420066 main.go:141] libmachine: (multinode-266395) Calling .GetConfigRaw
	I0108 23:17:37.216754  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:37.216971  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:37.217158  420066 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 23:17:37.217173  420066 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:17:37.218363  420066 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 23:17:37.218377  420066 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 23:17:37.218383  420066 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 23:17:37.218390  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.220626  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.220976  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.221007  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.221128  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.221299  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.221467  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.221604  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.221766  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:37.222147  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:37.222162  420066 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 23:17:37.342432  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:17:37.342463  420066 main.go:141] libmachine: Detecting the provisioner...
	I0108 23:17:37.342476  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.345441  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.345828  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.345861  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.345983  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.346194  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.346385  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.346519  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.346727  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:37.347107  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:37.347120  420066 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 23:17:37.472394  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 23:17:37.472497  420066 main.go:141] libmachine: found compatible host: buildroot
	I0108 23:17:37.472514  420066 main.go:141] libmachine: Provisioning with buildroot...
	I0108 23:17:37.472524  420066 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:17:37.472788  420066 buildroot.go:166] provisioning hostname "multinode-266395"
	I0108 23:17:37.472819  420066 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:17:37.472997  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.476048  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.476371  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.476394  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.476547  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.476718  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.476892  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.477021  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.477173  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:37.477491  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:37.477503  420066 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395 && echo "multinode-266395" | sudo tee /etc/hostname
	I0108 23:17:37.611646  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266395
	
	I0108 23:17:37.611683  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.614680  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.615091  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.615122  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.615278  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.615511  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.615664  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.615789  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.615917  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:37.616223  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:37.616239  420066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266395/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:17:37.749034  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:17:37.749071  420066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:17:37.749112  420066 buildroot.go:174] setting up certificates
	I0108 23:17:37.749121  420066 provision.go:83] configureAuth start
	I0108 23:17:37.749134  420066 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:17:37.749437  420066 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:17:37.752090  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.752445  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.752468  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.752654  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.754954  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.755266  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.755294  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.755460  420066 provision.go:138] copyHostCerts
	I0108 23:17:37.755504  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:17:37.755546  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:17:37.755556  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:17:37.755605  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:17:37.755688  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:17:37.755707  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:17:37.755716  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:17:37.755735  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:17:37.755830  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:17:37.755860  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:17:37.755869  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:17:37.755902  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:17:37.755980  420066 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.multinode-266395 san=[192.168.39.18 192.168.39.18 localhost 127.0.0.1 minikube multinode-266395]
	I0108 23:17:37.828195  420066 provision.go:172] copyRemoteCerts
	I0108 23:17:37.828259  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:17:37.828286  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.830897  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.831289  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.831322  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.831481  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.831688  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.831885  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.832022  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:17:37.921130  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:17:37.921229  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 23:17:37.944414  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:17:37.944495  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:17:37.968002  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:17:37.968095  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:17:37.989715  420066 provision.go:86] duration metric: configureAuth took 240.561538ms
	I0108 23:17:37.989754  420066 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:17:37.989982  420066 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:17:37.990098  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:37.992869  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.993260  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:37.993307  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:37.993492  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:37.993719  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.993913  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:37.994061  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:37.994252  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:37.994611  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:37.994629  420066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:17:38.300383  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:17:38.300419  420066 main.go:141] libmachine: Checking connection to Docker...
	I0108 23:17:38.300435  420066 main.go:141] libmachine: (multinode-266395) Calling .GetURL
	I0108 23:17:38.301903  420066 main.go:141] libmachine: (multinode-266395) DBG | Using libvirt version 6000000
	I0108 23:17:38.304000  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.304363  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.304416  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.304538  420066 main.go:141] libmachine: Docker is up and running!
	I0108 23:17:38.304553  420066 main.go:141] libmachine: Reticulating splines...
	I0108 23:17:38.304561  420066 client.go:171] LocalClient.Create took 24.518119272s
	I0108 23:17:38.304586  420066 start.go:167] duration metric: libmachine.API.Create for "multinode-266395" took 24.518186791s
	I0108 23:17:38.304609  420066 start.go:300] post-start starting for "multinode-266395" (driver="kvm2")
	I0108 23:17:38.304623  420066 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:17:38.304639  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:38.304865  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:17:38.304887  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:38.307313  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.307675  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.307694  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.307831  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:38.308018  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:38.308186  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:38.308343  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:17:38.396365  420066 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:17:38.400134  420066 command_runner.go:130] > NAME=Buildroot
	I0108 23:17:38.400159  420066 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 23:17:38.400166  420066 command_runner.go:130] > ID=buildroot
	I0108 23:17:38.400175  420066 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 23:17:38.400182  420066 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 23:17:38.400253  420066 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:17:38.400280  420066 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:17:38.400349  420066 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:17:38.400447  420066 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:17:38.400458  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:17:38.400562  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:17:38.408360  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:17:38.430675  420066 start.go:303] post-start completed in 126.04682ms
	I0108 23:17:38.430733  420066 main.go:141] libmachine: (multinode-266395) Calling .GetConfigRaw
	I0108 23:17:38.431329  420066 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:17:38.434227  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.434553  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.434585  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.434845  420066 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:17:38.435067  420066 start.go:128] duration metric: createHost completed in 24.666260662s
	I0108 23:17:38.435096  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:38.437663  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.438025  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.438056  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.438249  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:38.438455  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:38.438623  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:38.438792  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:38.438977  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:17:38.439303  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:17:38.439316  420066 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:17:38.559975  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704755858.529871968
	
	I0108 23:17:38.560021  420066 fix.go:206] guest clock: 1704755858.529871968
	I0108 23:17:38.560033  420066 fix.go:219] Guest: 2024-01-08 23:17:38.529871968 +0000 UTC Remote: 2024-01-08 23:17:38.435082285 +0000 UTC m=+24.790479525 (delta=94.789683ms)
	I0108 23:17:38.560059  420066 fix.go:190] guest clock delta is within tolerance: 94.789683ms
	I0108 23:17:38.560064  420066 start.go:83] releasing machines lock for "multinode-266395", held for 24.791334348s
	I0108 23:17:38.560089  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:38.560376  420066 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:17:38.563096  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.563456  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.563489  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.563672  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:38.564161  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:38.564331  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:17:38.564434  420066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:17:38.564484  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:38.564540  420066 ssh_runner.go:195] Run: cat /version.json
	I0108 23:17:38.564571  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:17:38.566988  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.567319  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.567367  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.567391  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.567537  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:38.567710  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:38.567748  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:38.567804  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:38.567907  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:38.567908  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:17:38.568081  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:17:38.568078  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:17:38.568331  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:17:38.568472  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:17:38.651857  420066 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 23:17:38.652056  420066 ssh_runner.go:195] Run: systemctl --version
	I0108 23:17:38.678978  420066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:17:38.679070  420066 command_runner.go:130] > systemd 247 (247)
	I0108 23:17:38.679096  420066 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 23:17:38.679156  420066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:17:38.841473  420066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:17:38.847813  420066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 23:17:38.848364  420066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:17:38.848434  420066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:17:38.862327  420066 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 23:17:38.862415  420066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:17:38.862432  420066 start.go:475] detecting cgroup driver to use...
	I0108 23:17:38.862503  420066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:17:38.874797  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:17:38.886352  420066 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:17:38.886401  420066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:17:38.898449  420066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:17:38.910229  420066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:17:39.009624  420066 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 23:17:39.009744  420066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:17:39.023798  420066 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 23:17:39.120683  420066 docker.go:219] disabling docker service ...
	I0108 23:17:39.120777  420066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:17:39.134445  420066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:17:39.146015  420066 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 23:17:39.146186  420066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:17:39.242248  420066 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 23:17:39.242347  420066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:17:39.254568  420066 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 23:17:39.254603  420066 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 23:17:39.339792  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:17:39.351904  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:17:39.368688  420066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:17:39.368736  420066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:17:39.368803  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:17:39.377468  420066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:17:39.377526  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:17:39.386195  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:17:39.394962  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:17:39.403830  420066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:17:39.413111  420066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:17:39.420813  420066 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:17:39.420885  420066 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:17:39.420954  420066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 23:17:39.434477  420066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:17:39.444504  420066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:17:39.540792  420066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 23:17:39.697956  420066 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:17:39.698025  420066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:17:39.702551  420066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:17:39.702592  420066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:17:39.702608  420066 command_runner.go:130] > Device: 16h/22d	Inode: 745         Links: 1
	I0108 23:17:39.702625  420066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:17:39.702634  420066 command_runner.go:130] > Access: 2024-01-08 23:17:39.656806798 +0000
	I0108 23:17:39.702652  420066 command_runner.go:130] > Modify: 2024-01-08 23:17:39.656806798 +0000
	I0108 23:17:39.702661  420066 command_runner.go:130] > Change: 2024-01-08 23:17:39.656806798 +0000
	I0108 23:17:39.702667  420066 command_runner.go:130] >  Birth: -
	I0108 23:17:39.702768  420066 start.go:543] Will wait 60s for crictl version
	I0108 23:17:39.702824  420066 ssh_runner.go:195] Run: which crictl
	I0108 23:17:39.706545  420066 command_runner.go:130] > /usr/bin/crictl
	I0108 23:17:39.706662  420066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:17:39.745797  420066 command_runner.go:130] > Version:  0.1.0
	I0108 23:17:39.745825  420066 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:17:39.745830  420066 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 23:17:39.745835  420066 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:17:39.747940  420066 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:17:39.748025  420066 ssh_runner.go:195] Run: crio --version
	I0108 23:17:39.791932  420066 command_runner.go:130] > crio version 1.24.1
	I0108 23:17:39.791966  420066 command_runner.go:130] > Version:          1.24.1
	I0108 23:17:39.791973  420066 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:17:39.791988  420066 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:17:39.791998  420066 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:17:39.792006  420066 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:17:39.792013  420066 command_runner.go:130] > Compiler:         gc
	I0108 23:17:39.792025  420066 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:17:39.792034  420066 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:17:39.792056  420066 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:17:39.792067  420066 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:17:39.792074  420066 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:17:39.792171  420066 ssh_runner.go:195] Run: crio --version
	I0108 23:17:39.831782  420066 command_runner.go:130] > crio version 1.24.1
	I0108 23:17:39.831803  420066 command_runner.go:130] > Version:          1.24.1
	I0108 23:17:39.831809  420066 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:17:39.831813  420066 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:17:39.831826  420066 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:17:39.831833  420066 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:17:39.831840  420066 command_runner.go:130] > Compiler:         gc
	I0108 23:17:39.831847  420066 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:17:39.831859  420066 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:17:39.831871  420066 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:17:39.831882  420066 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:17:39.831889  420066 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:17:39.834972  420066 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 23:17:39.836289  420066 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:17:39.838978  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:39.839298  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:17:39.839328  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:17:39.839563  420066 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:17:39.843866  420066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:17:39.855420  420066 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:17:39.855508  420066 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:17:39.889104  420066 command_runner.go:130] > {
	I0108 23:17:39.889129  420066 command_runner.go:130] >   "images": [
	I0108 23:17:39.889135  420066 command_runner.go:130] >   ]
	I0108 23:17:39.889140  420066 command_runner.go:130] > }
	I0108 23:17:39.889332  420066 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 23:17:39.889406  420066 ssh_runner.go:195] Run: which lz4
	I0108 23:17:39.893145  420066 command_runner.go:130] > /usr/bin/lz4
	I0108 23:17:39.893530  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 23:17:39.893629  420066 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 23:17:39.898004  420066 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:17:39.898048  420066 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:17:39.898066  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 23:17:41.731478  420066 crio.go:444] Took 1.837876 seconds to copy over tarball
	I0108 23:17:41.731589  420066 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 23:17:44.546590  420066 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.814958388s)
	I0108 23:17:44.546622  420066 crio.go:451] Took 2.815112 seconds to extract the tarball
	I0108 23:17:44.546634  420066 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 23:17:44.588067  420066 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:17:44.657196  420066 command_runner.go:130] > {
	I0108 23:17:44.657224  420066 command_runner.go:130] >   "images": [
	I0108 23:17:44.657229  420066 command_runner.go:130] >     {
	I0108 23:17:44.657240  420066 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 23:17:44.657246  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657255  420066 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 23:17:44.657261  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657269  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.657285  420066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 23:17:44.657304  420066 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 23:17:44.657311  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657316  420066 command_runner.go:130] >       "size": "65258016",
	I0108 23:17:44.657320  420066 command_runner.go:130] >       "uid": null,
	I0108 23:17:44.657324  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.657335  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.657339  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.657343  420066 command_runner.go:130] >     },
	I0108 23:17:44.657346  420066 command_runner.go:130] >     {
	I0108 23:17:44.657352  420066 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 23:17:44.657358  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657370  420066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 23:17:44.657381  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657388  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.657404  420066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 23:17:44.657416  420066 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 23:17:44.657422  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657430  420066 command_runner.go:130] >       "size": "31470524",
	I0108 23:17:44.657437  420066 command_runner.go:130] >       "uid": null,
	I0108 23:17:44.657442  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.657446  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.657453  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.657462  420066 command_runner.go:130] >     },
	I0108 23:17:44.657478  420066 command_runner.go:130] >     {
	I0108 23:17:44.657492  420066 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 23:17:44.657500  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657511  420066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 23:17:44.657518  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657523  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.657532  420066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 23:17:44.657544  420066 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 23:17:44.657554  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657561  420066 command_runner.go:130] >       "size": "53621675",
	I0108 23:17:44.657572  420066 command_runner.go:130] >       "uid": null,
	I0108 23:17:44.657582  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.657589  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.657599  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.657605  420066 command_runner.go:130] >     },
	I0108 23:17:44.657613  420066 command_runner.go:130] >     {
	I0108 23:17:44.657620  420066 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 23:17:44.657629  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657646  420066 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 23:17:44.657655  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657663  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.657677  420066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 23:17:44.657691  420066 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 23:17:44.657706  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657716  420066 command_runner.go:130] >       "size": "295456551",
	I0108 23:17:44.657724  420066 command_runner.go:130] >       "uid": {
	I0108 23:17:44.657735  420066 command_runner.go:130] >         "value": "0"
	I0108 23:17:44.657741  420066 command_runner.go:130] >       },
	I0108 23:17:44.657751  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.657758  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.657769  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.657778  420066 command_runner.go:130] >     },
	I0108 23:17:44.657784  420066 command_runner.go:130] >     {
	I0108 23:17:44.657792  420066 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 23:17:44.657799  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657811  420066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 23:17:44.657822  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657832  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.657848  420066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 23:17:44.657863  420066 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 23:17:44.657871  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657875  420066 command_runner.go:130] >       "size": "127226832",
	I0108 23:17:44.657883  420066 command_runner.go:130] >       "uid": {
	I0108 23:17:44.657890  420066 command_runner.go:130] >         "value": "0"
	I0108 23:17:44.657896  420066 command_runner.go:130] >       },
	I0108 23:17:44.657903  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.657913  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.657922  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.657929  420066 command_runner.go:130] >     },
	I0108 23:17:44.657937  420066 command_runner.go:130] >     {
	I0108 23:17:44.657952  420066 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 23:17:44.657960  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.657967  420066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 23:17:44.657976  420066 command_runner.go:130] >       ],
	I0108 23:17:44.657987  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.658008  420066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 23:17:44.658024  420066 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 23:17:44.658032  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658040  420066 command_runner.go:130] >       "size": "123261750",
	I0108 23:17:44.658047  420066 command_runner.go:130] >       "uid": {
	I0108 23:17:44.658052  420066 command_runner.go:130] >         "value": "0"
	I0108 23:17:44.658060  420066 command_runner.go:130] >       },
	I0108 23:17:44.658077  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.658087  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.658095  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.658104  420066 command_runner.go:130] >     },
	I0108 23:17:44.658110  420066 command_runner.go:130] >     {
	I0108 23:17:44.658123  420066 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 23:17:44.658131  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.658136  420066 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 23:17:44.658144  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658151  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.658173  420066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 23:17:44.658189  420066 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 23:17:44.658198  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658205  420066 command_runner.go:130] >       "size": "74749335",
	I0108 23:17:44.658214  420066 command_runner.go:130] >       "uid": null,
	I0108 23:17:44.658219  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.658227  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.658237  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.658244  420066 command_runner.go:130] >     },
	I0108 23:17:44.658253  420066 command_runner.go:130] >     {
	I0108 23:17:44.658263  420066 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 23:17:44.658273  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.658282  420066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 23:17:44.658291  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658298  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.658328  420066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 23:17:44.658344  420066 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 23:17:44.658353  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658364  420066 command_runner.go:130] >       "size": "61551410",
	I0108 23:17:44.658374  420066 command_runner.go:130] >       "uid": {
	I0108 23:17:44.658381  420066 command_runner.go:130] >         "value": "0"
	I0108 23:17:44.658389  420066 command_runner.go:130] >       },
	I0108 23:17:44.658393  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.658402  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.658410  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.658419  420066 command_runner.go:130] >     },
	I0108 23:17:44.658426  420066 command_runner.go:130] >     {
	I0108 23:17:44.658439  420066 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 23:17:44.658453  420066 command_runner.go:130] >       "repoTags": [
	I0108 23:17:44.658464  420066 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 23:17:44.658473  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658478  420066 command_runner.go:130] >       "repoDigests": [
	I0108 23:17:44.658487  420066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 23:17:44.658501  420066 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 23:17:44.658511  420066 command_runner.go:130] >       ],
	I0108 23:17:44.658518  420066 command_runner.go:130] >       "size": "750414",
	I0108 23:17:44.658530  420066 command_runner.go:130] >       "uid": {
	I0108 23:17:44.658539  420066 command_runner.go:130] >         "value": "65535"
	I0108 23:17:44.658544  420066 command_runner.go:130] >       },
	I0108 23:17:44.658550  420066 command_runner.go:130] >       "username": "",
	I0108 23:17:44.658560  420066 command_runner.go:130] >       "spec": null,
	I0108 23:17:44.658566  420066 command_runner.go:130] >       "pinned": false
	I0108 23:17:44.658574  420066 command_runner.go:130] >     }
	I0108 23:17:44.658580  420066 command_runner.go:130] >   ]
	I0108 23:17:44.658593  420066 command_runner.go:130] > }
	I0108 23:17:44.658787  420066 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 23:17:44.658806  420066 cache_images.go:84] Images are preloaded, skipping loading
	I0108 23:17:44.658903  420066 ssh_runner.go:195] Run: crio config
	I0108 23:17:44.708859  420066 command_runner.go:130] ! time="2024-01-08 23:17:44.687598068Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 23:17:44.708949  420066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:17:44.719202  420066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:17:44.719246  420066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:17:44.719257  420066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:17:44.719268  420066 command_runner.go:130] > #
	I0108 23:17:44.719283  420066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:17:44.719294  420066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:17:44.719311  420066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:17:44.719322  420066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:17:44.719329  420066 command_runner.go:130] > # reload'.
	I0108 23:17:44.719335  420066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:17:44.719343  420066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:17:44.719349  420066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:17:44.719379  420066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:17:44.719392  420066 command_runner.go:130] > [crio]
	I0108 23:17:44.719403  420066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:17:44.719409  420066 command_runner.go:130] > # containers images, in this directory.
	I0108 23:17:44.719416  420066 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 23:17:44.719428  420066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:17:44.719436  420066 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 23:17:44.719444  420066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:17:44.719453  420066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:17:44.719460  420066 command_runner.go:130] > storage_driver = "overlay"
	I0108 23:17:44.719466  420066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:17:44.719474  420066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:17:44.719478  420066 command_runner.go:130] > storage_option = [
	I0108 23:17:44.719483  420066 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 23:17:44.719488  420066 command_runner.go:130] > ]
	I0108 23:17:44.719495  420066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:17:44.719504  420066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:17:44.719508  420066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:17:44.719517  420066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:17:44.719525  420066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:17:44.719532  420066 command_runner.go:130] > # always happen on a node reboot
	I0108 23:17:44.719537  420066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:17:44.719545  420066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:17:44.719553  420066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:17:44.719565  420066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:17:44.719576  420066 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:17:44.719589  420066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:17:44.719605  420066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:17:44.719614  420066 command_runner.go:130] > # internal_wipe = true
	I0108 23:17:44.719626  420066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:17:44.719638  420066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:17:44.719650  420066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:17:44.719662  420066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:17:44.719673  420066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:17:44.719682  420066 command_runner.go:130] > [crio.api]
	I0108 23:17:44.719694  420066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:17:44.719704  420066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:17:44.719716  420066 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:17:44.719725  420066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:17:44.719740  420066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:17:44.719752  420066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:17:44.719760  420066 command_runner.go:130] > # stream_port = "0"
	I0108 23:17:44.719765  420066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:17:44.719775  420066 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:17:44.719783  420066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:17:44.719787  420066 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:17:44.719796  420066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:17:44.719804  420066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:17:44.719810  420066 command_runner.go:130] > # minutes.
	I0108 23:17:44.719815  420066 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:17:44.719823  420066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:17:44.719831  420066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:17:44.719837  420066 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:17:44.719843  420066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:17:44.719852  420066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:17:44.719870  420066 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:17:44.719877  420066 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:17:44.719884  420066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:17:44.719890  420066 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 23:17:44.719897  420066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:17:44.719904  420066 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 23:17:44.719929  420066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:17:44.719941  420066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:17:44.719945  420066 command_runner.go:130] > [crio.runtime]
	I0108 23:17:44.719951  420066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:17:44.719958  420066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:17:44.719963  420066 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:17:44.719971  420066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:17:44.719977  420066 command_runner.go:130] > # default_ulimits = [
	I0108 23:17:44.719981  420066 command_runner.go:130] > # ]
	I0108 23:17:44.719989  420066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:17:44.719995  420066 command_runner.go:130] > # no_pivot = false
	I0108 23:17:44.720001  420066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:17:44.720010  420066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:17:44.720017  420066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:17:44.720025  420066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:17:44.720032  420066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:17:44.720039  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:17:44.720045  420066 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 23:17:44.720052  420066 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:17:44.720062  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:17:44.720068  420066 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:17:44.720075  420066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:17:44.720083  420066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:17:44.720090  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:17:44.720096  420066 command_runner.go:130] > conmon_env = [
	I0108 23:17:44.720102  420066 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 23:17:44.720108  420066 command_runner.go:130] > ]
	I0108 23:17:44.720114  420066 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:17:44.720121  420066 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:17:44.720126  420066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:17:44.720133  420066 command_runner.go:130] > # default_env = [
	I0108 23:17:44.720136  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720144  420066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:17:44.720148  420066 command_runner.go:130] > # selinux = false
	I0108 23:17:44.720155  420066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:17:44.720167  420066 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:17:44.720196  420066 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:17:44.720214  420066 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:17:44.720222  420066 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:17:44.720230  420066 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:17:44.720238  420066 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:17:44.720244  420066 command_runner.go:130] > # which might increase security.
	I0108 23:17:44.720249  420066 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 23:17:44.720257  420066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:17:44.720264  420066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:17:44.720273  420066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:17:44.720281  420066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:17:44.720287  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:17:44.720294  420066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:17:44.720299  420066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:17:44.720305  420066 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:17:44.720310  420066 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:17:44.720317  420066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:17:44.720323  420066 command_runner.go:130] > # irqbalance daemon.
	I0108 23:17:44.720330  420066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:17:44.720339  420066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:17:44.720346  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:17:44.720350  420066 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:17:44.720362  420066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:17:44.720369  420066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:17:44.720375  420066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:17:44.720382  420066 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:17:44.720388  420066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:17:44.720396  420066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:17:44.720402  420066 command_runner.go:130] > # will be added.
	I0108 23:17:44.720406  420066 command_runner.go:130] > # default_capabilities = [
	I0108 23:17:44.720412  420066 command_runner.go:130] > # 	"CHOWN",
	I0108 23:17:44.720416  420066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:17:44.720422  420066 command_runner.go:130] > # 	"FSETID",
	I0108 23:17:44.720426  420066 command_runner.go:130] > # 	"FOWNER",
	I0108 23:17:44.720432  420066 command_runner.go:130] > # 	"SETGID",
	I0108 23:17:44.720436  420066 command_runner.go:130] > # 	"SETUID",
	I0108 23:17:44.720445  420066 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:17:44.720452  420066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:17:44.720455  420066 command_runner.go:130] > # 	"KILL",
	I0108 23:17:44.720462  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720468  420066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:17:44.720475  420066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:17:44.720480  420066 command_runner.go:130] > # default_sysctls = [
	I0108 23:17:44.720484  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720489  420066 command_runner.go:130] > # List of devices on the host that a
	I0108 23:17:44.720496  420066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:17:44.720502  420066 command_runner.go:130] > # allowed_devices = [
	I0108 23:17:44.720506  420066 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:17:44.720512  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720517  420066 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:17:44.720527  420066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:17:44.720534  420066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:17:44.720570  420066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:17:44.720577  420066 command_runner.go:130] > # additional_devices = [
	I0108 23:17:44.720583  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720595  420066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:17:44.720605  420066 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:17:44.720614  420066 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:17:44.720624  420066 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:17:44.720632  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720645  420066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:17:44.720657  420066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:17:44.720666  420066 command_runner.go:130] > # Defaults to false.
	I0108 23:17:44.720677  420066 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:17:44.720687  420066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:17:44.720695  420066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:17:44.720703  420066 command_runner.go:130] > # hooks_dir = [
	I0108 23:17:44.720711  420066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:17:44.720717  420066 command_runner.go:130] > # ]
	I0108 23:17:44.720723  420066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:17:44.720732  420066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:17:44.720737  420066 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:17:44.720746  420066 command_runner.go:130] > #
	I0108 23:17:44.720755  420066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:17:44.720763  420066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:17:44.720771  420066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:17:44.720776  420066 command_runner.go:130] > #
	I0108 23:17:44.720782  420066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:17:44.720791  420066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:17:44.720799  420066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:17:44.720807  420066 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:17:44.720810  420066 command_runner.go:130] > #
	I0108 23:17:44.720817  420066 command_runner.go:130] > # default_mounts_file = ""
	I0108 23:17:44.720823  420066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:17:44.720831  420066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:17:44.720836  420066 command_runner.go:130] > pids_limit = 1024
	I0108 23:17:44.720846  420066 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:17:44.720858  420066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:17:44.720867  420066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:17:44.720877  420066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:17:44.720887  420066 command_runner.go:130] > # log_size_max = -1
	I0108 23:17:44.720896  420066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:17:44.720902  420066 command_runner.go:130] > # log_to_journald = false
	I0108 23:17:44.720908  420066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:17:44.720915  420066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:17:44.720921  420066 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:17:44.720928  420066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:17:44.720933  420066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:17:44.720939  420066 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:17:44.720945  420066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:17:44.720951  420066 command_runner.go:130] > # read_only = false
	I0108 23:17:44.720958  420066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:17:44.720966  420066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:17:44.720971  420066 command_runner.go:130] > # live configuration reload.
	I0108 23:17:44.720977  420066 command_runner.go:130] > # log_level = "info"
	I0108 23:17:44.720983  420066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:17:44.720990  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:17:44.720994  420066 command_runner.go:130] > # log_filter = ""
	I0108 23:17:44.721006  420066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:17:44.721014  420066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:17:44.721019  420066 command_runner.go:130] > # separated by comma.
	I0108 23:17:44.721023  420066 command_runner.go:130] > # uid_mappings = ""
	I0108 23:17:44.721031  420066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:17:44.721037  420066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:17:44.721044  420066 command_runner.go:130] > # separated by comma.
	I0108 23:17:44.721048  420066 command_runner.go:130] > # gid_mappings = ""
	I0108 23:17:44.721056  420066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:17:44.721062  420066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:17:44.721069  420066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:17:44.721076  420066 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:17:44.721082  420066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:17:44.721090  420066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:17:44.721097  420066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:17:44.721103  420066 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:17:44.721109  420066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:17:44.721117  420066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:17:44.721125  420066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:17:44.721132  420066 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:17:44.721138  420066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:17:44.721146  420066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:17:44.721153  420066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:17:44.721158  420066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:17:44.721164  420066 command_runner.go:130] > drop_infra_ctr = false
	I0108 23:17:44.721170  420066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:17:44.721178  420066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:17:44.721187  420066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:17:44.721194  420066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:17:44.721200  420066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:17:44.721207  420066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:17:44.721212  420066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:17:44.721220  420066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:17:44.721227  420066 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 23:17:44.721233  420066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:17:44.721242  420066 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:17:44.721253  420066 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:17:44.721259  420066 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:17:44.721265  420066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:17:44.721274  420066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 23:17:44.721284  420066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:17:44.721291  420066 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:17:44.721302  420066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:17:44.721312  420066 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:17:44.721319  420066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:17:44.721322  420066 command_runner.go:130] > # ]
	I0108 23:17:44.721331  420066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:17:44.721338  420066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:17:44.721347  420066 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:17:44.721357  420066 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:17:44.721362  420066 command_runner.go:130] > #
	I0108 23:17:44.721368  420066 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:17:44.721374  420066 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:17:44.721379  420066 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:17:44.721388  420066 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:17:44.721396  420066 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:17:44.721400  420066 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:17:44.721406  420066 command_runner.go:130] > # Where:
	I0108 23:17:44.721411  420066 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:17:44.721419  420066 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:17:44.721427  420066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:17:44.721435  420066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:17:44.721441  420066 command_runner.go:130] > #   in $PATH.
	I0108 23:17:44.721448  420066 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:17:44.721455  420066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:17:44.721461  420066 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:17:44.721466  420066 command_runner.go:130] > #   state.
	I0108 23:17:44.721473  420066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:17:44.721480  420066 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 23:17:44.721487  420066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:17:44.721495  420066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:17:44.721503  420066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:17:44.721514  420066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:17:44.721521  420066 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:17:44.721527  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:17:44.721536  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:17:44.721544  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:17:44.721551  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:17:44.721561  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:17:44.721569  420066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:17:44.721578  420066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:17:44.721587  420066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:17:44.721598  420066 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:17:44.721609  420066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:17:44.721618  420066 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 23:17:44.721627  420066 command_runner.go:130] > runtime_type = "oci"
	I0108 23:17:44.721634  420066 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:17:44.721644  420066 command_runner.go:130] > runtime_config_path = ""
	I0108 23:17:44.721653  420066 command_runner.go:130] > monitor_path = ""
	I0108 23:17:44.721662  420066 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:17:44.721676  420066 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:17:44.721685  420066 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:17:44.721691  420066 command_runner.go:130] > # running containers
	I0108 23:17:44.721696  420066 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:17:44.721704  420066 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:17:44.721770  420066 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:17:44.721785  420066 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:17:44.721792  420066 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:17:44.721800  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:17:44.721805  420066 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:17:44.721811  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:17:44.721816  420066 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:17:44.721823  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:17:44.721829  420066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:17:44.721836  420066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:17:44.721845  420066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:17:44.721861  420066 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:17:44.721871  420066 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:17:44.721882  420066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:17:44.721894  420066 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:17:44.721903  420066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:17:44.721911  420066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:17:44.721920  420066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:17:44.721926  420066 command_runner.go:130] > # Example:
	I0108 23:17:44.721931  420066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:17:44.721938  420066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:17:44.721944  420066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:17:44.721951  420066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:17:44.721956  420066 command_runner.go:130] > # cpuset = 0
	I0108 23:17:44.721960  420066 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:17:44.721964  420066 command_runner.go:130] > # Where:
	I0108 23:17:44.721969  420066 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:17:44.721979  420066 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:17:44.721986  420066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:17:44.721994  420066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:17:44.722003  420066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:17:44.722013  420066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:17:44.722019  420066 command_runner.go:130] > # 
	I0108 23:17:44.722026  420066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:17:44.722031  420066 command_runner.go:130] > #
	I0108 23:17:44.722037  420066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:17:44.722045  420066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:17:44.722051  420066 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:17:44.722063  420066 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:17:44.722071  420066 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:17:44.722075  420066 command_runner.go:130] > [crio.image]
	I0108 23:17:44.722081  420066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:17:44.722088  420066 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:17:44.722094  420066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:17:44.722102  420066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:17:44.722108  420066 command_runner.go:130] > # global_auth_file = ""
	I0108 23:17:44.722113  420066 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:17:44.722118  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:17:44.722123  420066 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:17:44.722136  420066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:17:44.722142  420066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:17:44.722147  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:17:44.722151  420066 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:17:44.722156  420066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:17:44.722162  420066 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 23:17:44.722168  420066 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 23:17:44.722173  420066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:17:44.722177  420066 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:17:44.722183  420066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:17:44.722189  420066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:17:44.722195  420066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:17:44.722201  420066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:17:44.722206  420066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:17:44.722210  420066 command_runner.go:130] > # signature_policy = ""
	I0108 23:17:44.722216  420066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:17:44.722222  420066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:17:44.722226  420066 command_runner.go:130] > # changing them here.
	I0108 23:17:44.722232  420066 command_runner.go:130] > # insecure_registries = [
	I0108 23:17:44.722235  420066 command_runner.go:130] > # ]
	I0108 23:17:44.722242  420066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:17:44.722247  420066 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:17:44.722251  420066 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:17:44.722256  420066 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:17:44.722260  420066 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:17:44.722265  420066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:17:44.722271  420066 command_runner.go:130] > # CNI plugins.
	I0108 23:17:44.722275  420066 command_runner.go:130] > [crio.network]
	I0108 23:17:44.722280  420066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:17:44.722285  420066 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:17:44.722289  420066 command_runner.go:130] > # cni_default_network = ""
	I0108 23:17:44.722295  420066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:17:44.722299  420066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:17:44.722307  420066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:17:44.722311  420066 command_runner.go:130] > # plugin_dirs = [
	I0108 23:17:44.722316  420066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:17:44.722322  420066 command_runner.go:130] > # ]
	I0108 23:17:44.722330  420066 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:17:44.722335  420066 command_runner.go:130] > [crio.metrics]
	I0108 23:17:44.722340  420066 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:17:44.722346  420066 command_runner.go:130] > enable_metrics = true
	I0108 23:17:44.722351  420066 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:17:44.722359  420066 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:17:44.722366  420066 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:17:44.722375  420066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:17:44.722383  420066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:17:44.722389  420066 command_runner.go:130] > # metrics_collectors = [
	I0108 23:17:44.722393  420066 command_runner.go:130] > # 	"operations",
	I0108 23:17:44.722400  420066 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:17:44.722405  420066 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:17:44.722411  420066 command_runner.go:130] > # 	"operations_errors",
	I0108 23:17:44.722416  420066 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:17:44.722422  420066 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:17:44.722427  420066 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:17:44.722435  420066 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:17:44.722442  420066 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:17:44.722446  420066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:17:44.722453  420066 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:17:44.722457  420066 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:17:44.722463  420066 command_runner.go:130] > # 	"containers_oom",
	I0108 23:17:44.722467  420066 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:17:44.722473  420066 command_runner.go:130] > # 	"operations_total",
	I0108 23:17:44.722478  420066 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:17:44.722485  420066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:17:44.722489  420066 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:17:44.722496  420066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:17:44.722500  420066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:17:44.722507  420066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:17:44.722511  420066 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:17:44.722518  420066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:17:44.722523  420066 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:17:44.722530  420066 command_runner.go:130] > # ]
	I0108 23:17:44.722538  420066 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:17:44.722545  420066 command_runner.go:130] > # metrics_port = 9090
	I0108 23:17:44.722550  420066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:17:44.722557  420066 command_runner.go:130] > # metrics_socket = ""
	I0108 23:17:44.722562  420066 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:17:44.722570  420066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:17:44.722576  420066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:17:44.722587  420066 command_runner.go:130] > # certificate on any modification event.
	I0108 23:17:44.722596  420066 command_runner.go:130] > # metrics_cert = ""
	I0108 23:17:44.722608  420066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:17:44.722619  420066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:17:44.722628  420066 command_runner.go:130] > # metrics_key = ""
	I0108 23:17:44.722638  420066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:17:44.722646  420066 command_runner.go:130] > [crio.tracing]
	I0108 23:17:44.722657  420066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:17:44.722667  420066 command_runner.go:130] > # enable_tracing = false
	I0108 23:17:44.722678  420066 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 23:17:44.722688  420066 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:17:44.722709  420066 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:17:44.722719  420066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:17:44.722727  420066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:17:44.722731  420066 command_runner.go:130] > [crio.stats]
	I0108 23:17:44.722743  420066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:17:44.722757  420066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:17:44.722763  420066 command_runner.go:130] > # stats_collection_period = 0
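Almost everything in the crio.conf dump above is a commented-out default; the uncommented lines (conmon, conmon_cgroup, conmon_env, seccomp_use_default_when_empty, cgroup_manager = "cgroupfs", pids_limit = 1024, drop_infra_ctr, pinns_path, the [crio.runtime.runtimes.runc] handler, pause_image = "registry.k8s.io/pause:3.9", enable_metrics, and the 16 MiB gRPC message limits) are the values this file actually sets. Below is a minimal sketch of reading those effective settings back out of /etc/crio/crio.conf; it assumes the github.com/BurntSushi/toml package and is illustrative only, not part of the test run.

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// crioConf models only the handful of settings that are explicitly set in
// the dump above; every commented-out default is simply ignored here.
type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			Conmon        string `toml:"conmon"`
			PidsLimit     int64  `toml:"pids_limit"`
			PinnsPath     string `toml:"pinns_path"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Crio.Runtime.CgroupManager, c.Crio.Runtime.PidsLimit, c.Crio.Image.PauseImage)
}

Run against the file dumped above, this would print "cgroupfs 1024 registry.k8s.io/pause:3.9", matching the kubelet's cgroupfs driver configured later in the kubeadm config.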
	I0108 23:17:44.722852  420066 cni.go:84] Creating CNI manager for ""
	I0108 23:17:44.722867  420066 cni.go:136] 1 nodes found, recommending kindnet
	I0108 23:17:44.722889  420066 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:17:44.722910  420066 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266395 NodeName:multinode-266395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:17:44.723061  420066 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
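The generated config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of splitting and identifying each document, assuming gopkg.in/yaml.v3 and a hypothetical local copy named kubeadm.yaml:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// For the stream above this prints InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration in order.
		fmt.Println(doc.Kind, doc.APIVersion)
	}
}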
	
	I0108 23:17:44.723139  420066 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-266395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
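The drop-in above follows the usual systemd override pattern: the bare ExecStart= line clears the ExecStart inherited from kubelet.service before the full command line is set. The sketch below shows how such a drop-in could be rendered with text/template; the template and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors the shape of the drop-in above: an empty ExecStart=
// resets any inherited value, then the kubelet command line is set.
const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(unitTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime":     "crio",
		"KubeletPath": "/var/lib/minikube/binaries/v1.28.4/kubelet",
		"CRISocket":   "unix:///var/run/crio/crio.sock",
		"NodeName":    "multinode-266395",
		"NodeIP":      "192.168.39.18",
	})
	if err != nil {
		panic(err)
	}
}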
	I0108 23:17:44.723192  420066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:17:44.733050  420066 command_runner.go:130] > kubeadm
	I0108 23:17:44.733069  420066 command_runner.go:130] > kubectl
	I0108 23:17:44.733073  420066 command_runner.go:130] > kubelet
	I0108 23:17:44.733265  420066 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:17:44.733332  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:17:44.742325  420066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0108 23:17:44.757651  420066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:17:44.772851  420066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 23:17:44.788461  420066 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0108 23:17:44.792312  420066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:17:44.804009  420066 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395 for IP: 192.168.39.18
	I0108 23:17:44.804053  420066 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:44.804249  420066 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:17:44.804302  420066 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:17:44.804349  420066 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key
	I0108 23:17:44.804366  420066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt with IP's: []
	I0108 23:17:44.910447  420066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt ...
	I0108 23:17:44.910483  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt: {Name:mk6c4b03663430921492987a03f4704b243630e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:44.910652  420066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key ...
	I0108 23:17:44.910663  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key: {Name:mk9a5e470e3583f7d8b68302dc06255278fc9627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:44.910737  420066 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key.c202909e
	I0108 23:17:44.910751  420066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt.c202909e with IP's: [192.168.39.18 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 23:17:45.142682  420066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt.c202909e ...
	I0108 23:17:45.142719  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt.c202909e: {Name:mk6a88ae587c2aab454a0326ff3328ce12b46ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:45.142889  420066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key.c202909e ...
	I0108 23:17:45.142926  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key.c202909e: {Name:mk2499d03c07fa708fe394905d833701bde54fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:45.143014  420066 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt.c202909e -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt
	I0108 23:17:45.143089  420066 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key.c202909e -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key
	I0108 23:17:45.143143  420066 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key
	I0108 23:17:45.143157  420066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt with IP's: []
	I0108 23:17:45.380509  420066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt ...
	I0108 23:17:45.380544  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt: {Name:mkdb0d5ce8d3785d0c749d7bbcf26909e96ac131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:17:45.380711  420066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key ...
	I0108 23:17:45.380724  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key: {Name:mk2dfa5acc732e18cb9b031f411cd1d7cff93498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
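The profile certificates generated above are a client cert, an apiserver cert with the IP SANs [192.168.39.18 10.96.0.1 127.0.0.1 10.0.0.1], and a proxy-client (aggregator) cert, each written next to its key under the multinode-266395 profile. The sketch below shows the underlying crypto/x509 calls for a cert carrying those IP SANs; for brevity it self-signs instead of loading the cached minikubeCA key pair, so it is illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Normally the signer would be the CA loaded from .minikube/ca.{crt,key};
	// here template == parent, i.e. a self-signed cert, purely for illustration.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.39.18"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}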
	I0108 23:17:45.380791  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 23:17:45.380809  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 23:17:45.380819  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 23:17:45.380831  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 23:17:45.380855  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:17:45.380872  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:17:45.380884  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:17:45.380899  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:17:45.380947  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:17:45.380982  420066 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:17:45.380994  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:17:45.381019  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:17:45.381042  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:17:45.381066  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:17:45.381131  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:17:45.381159  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:17:45.381177  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:17:45.381192  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:17:45.381804  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:17:45.405515  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 23:17:45.427808  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:17:45.449382  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 23:17:45.471511  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:17:45.494029  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:17:45.516867  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:17:45.539869  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:17:45.562003  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:17:45.585242  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:17:45.608140  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:17:45.630134  420066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:17:45.646176  420066 ssh_runner.go:195] Run: openssl version
	I0108 23:17:45.652002  420066 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 23:17:45.652099  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:17:45.662945  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:17:45.667374  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:17:45.667619  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:17:45.667680  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:17:45.672880  420066 command_runner.go:130] > 3ec20f2e
	I0108 23:17:45.673038  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:17:45.683426  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:17:45.694119  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:17:45.700670  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:17:45.700966  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:17:45.701027  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:17:45.706881  420066 command_runner.go:130] > b5213941
	I0108 23:17:45.707150  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:17:45.719526  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:17:45.729970  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:17:45.734491  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:17:45.734854  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:17:45.734912  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:17:45.740342  420066 command_runner.go:130] > 51391683
	I0108 23:17:45.740612  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
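The three symlink passes above follow the standard OpenSSL hashed-directory layout: each CA certificate is first linked from /usr/share/ca-certificates into /etc/ssl/certs under its own name, then linked again under its subject hash (for example 3ec20f2e.0) so that TLS libraries scanning /etc/ssl/certs can find it. A minimal sketch of the same two steps, using the placeholder file name example.pem rather than any path from this run:

    # link the CA into the trust directory, then add the hash-named alias
    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"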
	I0108 23:17:45.752239  420066 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:17:45.756830  420066 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:17:45.756884  420066 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
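The failed listing of /var/lib/minikube/certs/etcd (exit status 2) is how minikube decides this is a first start rather than a restart: had kubeadm run here before, the etcd certificate directory would already exist. A rough shell equivalent of that check, for illustration only:

    if ! ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
        echo "no etcd certs yet - treating this as a fresh control plane"
    fi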
	I0108 23:17:45.756940  420066 kubeadm.go:404] StartCluster: {Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:17:45.757047  420066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:17:45.757110  420066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:17:45.800238  420066 cri.go:89] found id: ""
	I0108 23:17:45.800334  420066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:17:45.811404  420066 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 23:17:45.811431  420066 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 23:17:45.811438  420066 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 23:17:45.811801  420066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:17:45.822587  420066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:17:45.832090  420066 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 23:17:45.832113  420066 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 23:17:45.832121  420066 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 23:17:45.832129  420066 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:17:45.832164  420066 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:17:45.832210  420066 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 23:17:46.197523  420066 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:17:46.197558  420066 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:17:58.666215  420066 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 23:17:58.666253  420066 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 23:17:58.666338  420066 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 23:17:58.666371  420066 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:17:58.666478  420066 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:17:58.666495  420066 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 23:17:58.666600  420066 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:17:58.666618  420066 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 23:17:58.666757  420066 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 23:17:58.666769  420066 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 23:17:58.666840  420066 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:17:58.668562  420066 out.go:204]   - Generating certificates and keys ...
	I0108 23:17:58.666894  420066 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:17:58.668692  420066 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 23:17:58.668704  420066 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 23:17:58.668785  420066 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 23:17:58.668794  420066 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 23:17:58.668879  420066 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:17:58.668892  420066 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 23:17:58.668969  420066 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:17:58.668977  420066 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 23:17:58.669061  420066 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 23:17:58.669077  420066 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 23:17:58.669154  420066 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 23:17:58.669164  420066 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 23:17:58.669235  420066 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 23:17:58.669245  420066 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 23:17:58.669363  420066 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-266395] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0108 23:17:58.669376  420066 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-266395] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0108 23:17:58.669445  420066 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 23:17:58.669453  420066 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 23:17:58.669583  420066 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-266395] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0108 23:17:58.669591  420066 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-266395] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0108 23:17:58.669673  420066 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:17:58.669682  420066 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 23:17:58.669772  420066 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:17:58.669780  420066 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 23:17:58.669839  420066 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 23:17:58.669847  420066 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 23:17:58.669950  420066 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:17:58.669977  420066 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:17:58.670040  420066 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:17:58.670051  420066 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:17:58.670112  420066 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:17:58.670126  420066 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:17:58.670209  420066 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:17:58.670218  420066 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:17:58.670278  420066 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:17:58.670284  420066 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:17:58.670373  420066 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:17:58.670383  420066 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 23:17:58.670471  420066 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:17:58.670493  420066 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:17:58.672186  420066 out.go:204]   - Booting up control plane ...
	I0108 23:17:58.672293  420066 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:17:58.672308  420066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:17:58.672408  420066 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:17:58.672417  420066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:17:58.672479  420066 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:17:58.672485  420066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:17:58.672564  420066 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:17:58.672572  420066 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:17:58.672702  420066 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:17:58.672722  420066 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:17:58.672784  420066 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:17:58.672794  420066 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 23:17:58.672959  420066 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:17:58.672985  420066 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 23:17:58.673094  420066 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005658 seconds
	I0108 23:17:58.673102  420066 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005658 seconds
	I0108 23:17:58.673188  420066 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:17:58.673196  420066 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 23:17:58.673324  420066 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:17:58.673331  420066 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 23:17:58.673394  420066 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:17:58.673403  420066 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 23:17:58.673621  420066 command_runner.go:130] > [mark-control-plane] Marking the node multinode-266395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 23:17:58.673628  420066 kubeadm.go:322] [mark-control-plane] Marking the node multinode-266395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 23:17:58.673687  420066 command_runner.go:130] > [bootstrap-token] Using token: dotg5o.c0ypcexewx2a4gcy
	I0108 23:17:58.673696  420066 kubeadm.go:322] [bootstrap-token] Using token: dotg5o.c0ypcexewx2a4gcy
	I0108 23:17:58.675272  420066 out.go:204]   - Configuring RBAC rules ...
	I0108 23:17:58.675396  420066 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:17:58.675408  420066 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 23:17:58.675518  420066 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:17:58.675540  420066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 23:17:58.675690  420066 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:17:58.675706  420066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 23:17:58.675820  420066 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:17:58.675832  420066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 23:17:58.675951  420066 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:17:58.675962  420066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 23:17:58.676049  420066 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:17:58.676063  420066 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 23:17:58.676189  420066 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:17:58.676197  420066 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 23:17:58.676231  420066 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 23:17:58.676236  420066 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 23:17:58.676272  420066 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 23:17:58.676277  420066 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 23:17:58.676281  420066 kubeadm.go:322] 
	I0108 23:17:58.676333  420066 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 23:17:58.676339  420066 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 23:17:58.676342  420066 kubeadm.go:322] 
	I0108 23:17:58.676423  420066 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 23:17:58.676435  420066 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 23:17:58.676441  420066 kubeadm.go:322] 
	I0108 23:17:58.676475  420066 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 23:17:58.676492  420066 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 23:17:58.676589  420066 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:17:58.676592  420066 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 23:17:58.676672  420066 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:17:58.676685  420066 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 23:17:58.676711  420066 kubeadm.go:322] 
	I0108 23:17:58.676783  420066 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 23:17:58.676793  420066 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 23:17:58.676799  420066 kubeadm.go:322] 
	I0108 23:17:58.676843  420066 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 23:17:58.676849  420066 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 23:17:58.676853  420066 kubeadm.go:322] 
	I0108 23:17:58.676922  420066 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 23:17:58.676933  420066 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 23:17:58.677033  420066 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:17:58.677054  420066 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 23:17:58.677109  420066 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:17:58.677114  420066 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 23:17:58.677118  420066 kubeadm.go:322] 
	I0108 23:17:58.677185  420066 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:17:58.677192  420066 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 23:17:58.677251  420066 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 23:17:58.677256  420066 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 23:17:58.677260  420066 kubeadm.go:322] 
	I0108 23:17:58.677329  420066 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token dotg5o.c0ypcexewx2a4gcy \
	I0108 23:17:58.677335  420066 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dotg5o.c0ypcexewx2a4gcy \
	I0108 23:17:58.677440  420066 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0108 23:17:58.677447  420066 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0108 23:17:58.677464  420066 command_runner.go:130] > 	--control-plane 
	I0108 23:17:58.677469  420066 kubeadm.go:322] 	--control-plane 
	I0108 23:17:58.677473  420066 kubeadm.go:322] 
	I0108 23:17:58.677543  420066 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:17:58.677550  420066 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 23:17:58.677554  420066 kubeadm.go:322] 
	I0108 23:17:58.677624  420066 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dotg5o.c0ypcexewx2a4gcy \
	I0108 23:17:58.677634  420066 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dotg5o.c0ypcexewx2a4gcy \
	I0108 23:17:58.677744  420066 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0108 23:17:58.677764  420066 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
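The --discovery-token-ca-cert-hash value printed in the join commands is a SHA-256 digest of the cluster CA's public key; joining nodes use it to pin the control plane's identity. If it is ever needed again, it can be recomputed on the control-plane node with the standard recipe from the kubeadm documentation (shown here against minikube's certificate directory; adjust the path for a stock kubeadm install):

    # recompute the value that follows "sha256:" in kubeadm join
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'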
	I0108 23:17:58.677772  420066 cni.go:84] Creating CNI manager for ""
	I0108 23:17:58.677778  420066 cni.go:136] 1 nodes found, recommending kindnet
	I0108 23:17:58.679439  420066 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 23:17:58.681091  420066 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:17:58.713970  420066 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:17:58.714005  420066 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 23:17:58.714017  420066 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 23:17:58.714027  420066 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:17:58.714036  420066 command_runner.go:130] > Access: 2024-01-08 23:17:27.217891140 +0000
	I0108 23:17:58.714044  420066 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 23:17:58.714050  420066 command_runner.go:130] > Change: 2024-01-08 23:17:25.384891140 +0000
	I0108 23:17:58.714055  420066 command_runner.go:130] >  Birth: -
	I0108 23:17:58.714450  420066 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:17:58.714471  420066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:17:58.746168  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:17:59.800824  420066 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 23:17:59.807019  420066 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 23:17:59.825901  420066 command_runner.go:130] > serviceaccount/kindnet created
	I0108 23:17:59.841538  420066 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 23:17:59.844633  420066 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.098415421s)
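With exactly one node found, minikube picks kindnet as the CNI and applies its manifest (the ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet reported as created above). A quick follow-up check that the pod network is rolling out, shown only as an illustration (the label selector may differ between manifest versions):

    kubectl --context multinode-266395 -n kube-system get daemonset kindnet
    kubectl --context multinode-266395 -n kube-system get pods -l app=kindnet -o wide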
	I0108 23:17:59.844706  420066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:17:59.844817  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:17:59.844835  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-266395 minikube.k8s.io/updated_at=2024_01_08T23_17_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:17:59.881665  420066 command_runner.go:130] > -16
	I0108 23:17:59.881726  420066 ops.go:34] apiserver oom_adj: -16
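Reading -16 from the API server's oom_adj confirms the control-plane process carries a strongly negative OOM adjustment (the legacy oom_adj scale runs from -17 to +15), so the kernel's OOM killer will prefer almost any other process first. The same check by hand, alongside the modern oom_score_adj interface:

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj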
	I0108 23:18:00.061654  420066 command_runner.go:130] > node/multinode-266395 labeled
	I0108 23:18:00.061736  420066 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 23:18:00.061845  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:00.154669  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:00.562218  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:00.645260  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:01.062006  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:01.153775  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:01.562836  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:01.645602  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:02.062213  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:02.154014  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:02.561962  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:02.662094  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:03.062783  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:03.162031  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:03.562629  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:03.648776  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:04.061978  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:04.151523  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:04.562910  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:04.659787  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:05.062540  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:05.145230  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:05.562253  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:05.647313  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:06.062701  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:06.148970  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:06.562632  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:06.646000  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:07.062604  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:07.155676  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:07.562048  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:07.649201  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:08.062830  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:08.150321  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:08.561964  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:08.646537  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:09.062443  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:09.151929  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:09.562195  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:09.653407  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:10.062799  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:10.230675  420066 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 23:18:10.562065  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:10.660906  420066 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 23:18:10.660940  420066 command_runner.go:130] > default   0         0s
	I0108 23:18:10.662472  420066 kubeadm.go:1088] duration metric: took 10.817729238s to wait for elevateKubeSystemPrivileges.
	I0108 23:18:10.662507  420066 kubeadm.go:406] StartCluster complete in 24.905572017s
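The repeated 'kubectl get sa default' calls above, each answered with NotFound until 23:18:10, are minikube polling for the kube-controller-manager to create the namespace's default ServiceAccount, the signal that service-account bootstrapping has settled. A bash sketch of the same wait, assuming kubectl and a working kubeconfig:

    # poll roughly twice per second, as the log above does
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "default ServiceAccount exists - cluster can run workloads"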
	I0108 23:18:10.662530  420066 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:18:10.662672  420066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:10.663398  420066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:18:10.663669  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:18:10.663823  420066 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:18:10.663904  420066 addons.go:69] Setting storage-provisioner=true in profile "multinode-266395"
	I0108 23:18:10.663925  420066 addons.go:69] Setting default-storageclass=true in profile "multinode-266395"
	I0108 23:18:10.663939  420066 addons.go:237] Setting addon storage-provisioner=true in "multinode-266395"
	I0108 23:18:10.663953  420066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-266395"
	I0108 23:18:10.663975  420066 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:18:10.664011  420066 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:18:10.664042  420066 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:10.664433  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:10.664453  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:10.664465  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:10.664475  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:10.664405  420066 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:18:10.665193  420066 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 23:18:10.665485  420066 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:18:10.665499  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:10.665510  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:10.665520  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:10.681596  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0108 23:18:10.682137  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:10.682732  420066 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0108 23:18:10.682757  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:10.682767  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:10.682776  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:10.682783  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:10.682791  420066 round_trippers.go:580]     Content-Length: 291
	I0108 23:18:10.682798  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:10 GMT
	I0108 23:18:10.682807  420066 round_trippers.go:580]     Audit-Id: ed1a62ad-5bca-4966-9879-961c943ec0b0
	I0108 23:18:10.682814  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:10.682890  420066 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"235","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:18:10.682901  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:10.682920  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:10.683281  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:10.683419  420066 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"235","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:18:10.683504  420066 round_trippers.go:463] PUT https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:18:10.683519  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:10.683530  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:10.683535  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0108 23:18:10.683540  420066 round_trippers.go:473]     Content-Type: application/json
	I0108 23:18:10.683549  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:10.683874  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:10.683925  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:10.683988  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:10.684538  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:10.684566  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:10.684924  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:10.685080  420066 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:18:10.687006  420066 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:10.687236  420066 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:18:10.687515  420066 addons.go:237] Setting addon default-storageclass=true in "multinode-266395"
	I0108 23:18:10.687564  420066 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:18:10.687887  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:10.687920  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:10.694960  420066 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 23:18:10.694987  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:10.694997  420066 round_trippers.go:580]     Audit-Id: 8c4c23a3-d8f3-4379-b085-d75cba222369
	I0108 23:18:10.695007  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:10.695016  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:10.695028  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:10.695040  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:10.695053  420066 round_trippers.go:580]     Content-Length: 291
	I0108 23:18:10.695065  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:10 GMT
	I0108 23:18:10.695497  420066 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"313","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:18:10.700266  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0108 23:18:10.700712  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:10.701259  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:10.701283  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:10.702734  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0108 23:18:10.702994  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:10.703143  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:10.703246  420066 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:18:10.703643  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:10.703672  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:10.704043  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:10.704545  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:10.704587  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:10.705146  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:18:10.707573  420066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:18:10.709490  420066 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:18:10.709513  420066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 23:18:10.709534  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:18:10.712672  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:10.713147  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:18:10.713199  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:10.713359  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:18:10.713567  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:18:10.713755  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:18:10.713911  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:18:10.721306  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I0108 23:18:10.721726  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:10.722229  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:10.722260  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:10.722560  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:10.722761  420066 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:18:10.724368  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:18:10.724652  420066 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 23:18:10.724672  420066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 23:18:10.724694  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:18:10.727214  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:10.727707  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:18:10.727740  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:10.727882  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:18:10.728066  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:18:10.728246  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:18:10.728371  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:18:10.825065  420066 command_runner.go:130] > apiVersion: v1
	I0108 23:18:10.825111  420066 command_runner.go:130] > data:
	I0108 23:18:10.825118  420066 command_runner.go:130] >   Corefile: |
	I0108 23:18:10.825129  420066 command_runner.go:130] >     .:53 {
	I0108 23:18:10.825135  420066 command_runner.go:130] >         errors
	I0108 23:18:10.825144  420066 command_runner.go:130] >         health {
	I0108 23:18:10.825151  420066 command_runner.go:130] >            lameduck 5s
	I0108 23:18:10.825157  420066 command_runner.go:130] >         }
	I0108 23:18:10.825164  420066 command_runner.go:130] >         ready
	I0108 23:18:10.825173  420066 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 23:18:10.825182  420066 command_runner.go:130] >            pods insecure
	I0108 23:18:10.825191  420066 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 23:18:10.825202  420066 command_runner.go:130] >            ttl 30
	I0108 23:18:10.825209  420066 command_runner.go:130] >         }
	I0108 23:18:10.825217  420066 command_runner.go:130] >         prometheus :9153
	I0108 23:18:10.825225  420066 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 23:18:10.825235  420066 command_runner.go:130] >            max_concurrent 1000
	I0108 23:18:10.825276  420066 command_runner.go:130] >         }
	I0108 23:18:10.825293  420066 command_runner.go:130] >         cache 30
	I0108 23:18:10.825299  420066 command_runner.go:130] >         loop
	I0108 23:18:10.825305  420066 command_runner.go:130] >         reload
	I0108 23:18:10.825310  420066 command_runner.go:130] >         loadbalance
	I0108 23:18:10.825317  420066 command_runner.go:130] >     }
	I0108 23:18:10.825321  420066 command_runner.go:130] > kind: ConfigMap
	I0108 23:18:10.825324  420066 command_runner.go:130] > metadata:
	I0108 23:18:10.825337  420066 command_runner.go:130] >   creationTimestamp: "2024-01-08T23:17:58Z"
	I0108 23:18:10.825344  420066 command_runner.go:130] >   name: coredns
	I0108 23:18:10.825348  420066 command_runner.go:130] >   namespace: kube-system
	I0108 23:18:10.825354  420066 command_runner.go:130] >   resourceVersion: "231"
	I0108 23:18:10.825359  420066 command_runner.go:130] >   uid: 46dcdfb1-d486-4d04-9672-97a7f8a58bba
	I0108 23:18:10.825476  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 23:18:10.903299  420066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:18:10.929444  420066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 23:18:11.165855  420066 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:18:11.165890  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:11.165904  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:11.165914  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:11.258332  420066 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I0108 23:18:11.258357  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:11.258364  420066 round_trippers.go:580]     Content-Length: 291
	I0108 23:18:11.258370  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:11 GMT
	I0108 23:18:11.258378  420066 round_trippers.go:580]     Audit-Id: b08e0452-78de-42a3-ade0-858c3ea19300
	I0108 23:18:11.258385  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:11.258393  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:11.258400  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:11.258408  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:11.265691  420066 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"331","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 23:18:11.265835  420066 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266395" context rescaled to 1 replicas
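For reference, the rescale recorded in the line above can be expressed as a small client-go program. This is a minimal illustrative sketch only (not minikube's own code); the kubeconfig path is the one seen in the log, and the Scale-subresource approach is an assumption about how one would reproduce the same API call:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; purely illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read the Scale subresource of the coredns deployment and pin it to one
	// replica, which is what the "rescaled to 1 replicas" line above reports.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns deployment rescaled to 1 replica")
}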
	I0108 23:18:11.265876  420066 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:18:11.267902  420066 out.go:177] * Verifying Kubernetes components...
	I0108 23:18:11.269636  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:11.536440  420066 command_runner.go:130] > configmap/coredns replaced
	I0108 23:18:11.539310  420066 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
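The host-record injection reported above is done by the sed pipeline over "kubectl get configmap coredns ... | kubectl replace -f -" logged earlier. A rough client-go equivalent is sketched below; the function name and the exact indentation of the hosts stanza are illustrative assumptions, and it reuses the clientset and imports from the sketch above (plus "strings"):

// injectHostRecord adds a hosts{} stanza mapping host.minikube.internal to the
// host IP, inserted ahead of the forward plugin as the sed expression does.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return nil // already injected
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}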
	I0108 23:18:11.861892  420066 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 23:18:11.861925  420066 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 23:18:11.861936  420066 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 23:18:11.861946  420066 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 23:18:11.861953  420066 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 23:18:11.861967  420066 command_runner.go:130] > pod/storage-provisioner created
	I0108 23:18:11.862018  420066 main.go:141] libmachine: Making call to close driver server
	I0108 23:18:11.862034  420066 main.go:141] libmachine: (multinode-266395) Calling .Close
	I0108 23:18:11.862045  420066 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 23:18:11.862095  420066 main.go:141] libmachine: Making call to close driver server
	I0108 23:18:11.862119  420066 main.go:141] libmachine: (multinode-266395) Calling .Close
	I0108 23:18:11.862408  420066 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:18:11.862421  420066 main.go:141] libmachine: (multinode-266395) DBG | Closing plugin on server side
	I0108 23:18:11.862427  420066 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:18:11.862438  420066 main.go:141] libmachine: Making call to close driver server
	I0108 23:18:11.862450  420066 main.go:141] libmachine: (multinode-266395) Calling .Close
	I0108 23:18:11.862463  420066 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:18:11.862472  420066 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:18:11.862487  420066 main.go:141] libmachine: Making call to close driver server
	I0108 23:18:11.862495  420066 main.go:141] libmachine: (multinode-266395) Calling .Close
	I0108 23:18:11.862677  420066 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:11.862737  420066 main.go:141] libmachine: (multinode-266395) DBG | Closing plugin on server side
	I0108 23:18:11.862774  420066 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:18:11.862788  420066 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:18:11.862862  420066 main.go:141] libmachine: (multinode-266395) DBG | Closing plugin on server side
	I0108 23:18:11.862895  420066 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:18:11.862919  420066 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:18:11.863043  420066 round_trippers.go:463] GET https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 23:18:11.863054  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:11.863064  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:11.863073  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:11.863109  420066 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:18:11.863475  420066 node_ready.go:35] waiting up to 6m0s for node "multinode-266395" to be "Ready" ...
	I0108 23:18:11.863608  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:11.863620  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:11.863631  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:11.863638  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:11.877941  420066 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0108 23:18:11.877962  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:11.877971  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:11.877978  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:11.877985  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:11.877992  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:11 GMT
	I0108 23:18:11.878000  420066 round_trippers.go:580]     Audit-Id: 679f27be-d899-4a19-a03c-41a754b04367
	I0108 23:18:11.878008  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:11.880165  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:11.881415  420066 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0108 23:18:11.881434  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:11.881451  420066 round_trippers.go:580]     Audit-Id: 847595f2-9372-455c-ba6b-8087f4665837
	I0108 23:18:11.881464  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:11.881477  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:11.881489  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:11.881502  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:11.881514  420066 round_trippers.go:580]     Content-Length: 1273
	I0108 23:18:11.881526  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:11 GMT
	I0108 23:18:11.881622  420066 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"372"},"items":[{"metadata":{"name":"standard","uid":"587d92b2-8823-4d6f-9f6c-ef4eeb52fd55","resourceVersion":"359","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 23:18:11.882004  420066 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"587d92b2-8823-4d6f-9f6c-ef4eeb52fd55","resourceVersion":"359","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 23:18:11.882066  420066 round_trippers.go:463] PUT https://192.168.39.18:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 23:18:11.882079  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:11.882090  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:11.882102  420066 round_trippers.go:473]     Content-Type: application/json
	I0108 23:18:11.882115  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:11.890584  420066 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 23:18:11.890611  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:11.890621  420066 round_trippers.go:580]     Audit-Id: 0d9a65cf-8180-4c76-8ff2-af70878dea51
	I0108 23:18:11.890628  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:11.890637  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:11.890645  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:11.890656  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:11.890666  420066 round_trippers.go:580]     Content-Length: 1220
	I0108 23:18:11.890678  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:11 GMT
	I0108 23:18:11.890717  420066 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"587d92b2-8823-4d6f-9f6c-ef4eeb52fd55","resourceVersion":"359","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 23:18:11.890866  420066 main.go:141] libmachine: Making call to close driver server
	I0108 23:18:11.890884  420066 main.go:141] libmachine: (multinode-266395) Calling .Close
	I0108 23:18:11.891191  420066 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:18:11.891214  420066 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:18:11.893220  420066 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 23:18:11.894583  420066 addons.go:508] enable addons completed in 1.230765472s: enabled=[storage-provisioner default-storageclass]
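The default-storageclass step above GETs the "standard" StorageClass and PUTs it back with the storageclass.kubernetes.io/is-default-class annotation kept at "true". A hedged sketch of that idea with client-go (illustrative function name, same clientset and imports as the sketches above):

// markDefaultStorageClass ensures the named StorageClass carries the
// is-default-class annotation, matching the PUT recorded in the log above.
func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}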
	I0108 23:18:12.364394  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:12.364420  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:12.364429  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:12.364435  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:12.367278  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:12.367298  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:12.367306  420066 round_trippers.go:580]     Audit-Id: b7cbf808-a6ad-4c92-8fab-f7bc7c07b0cc
	I0108 23:18:12.367311  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:12.367316  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:12.367322  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:12.367328  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:12.367335  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:12 GMT
	I0108 23:18:12.368027  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:12.863698  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:12.863728  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:12.863737  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:12.863744  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:12.867324  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:12.867346  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:12.867353  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:12.867383  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:12.867392  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:12.867400  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:12 GMT
	I0108 23:18:12.867408  420066 round_trippers.go:580]     Audit-Id: 9c23872d-1f3d-4289-90f1-9fe3e5c9cb0f
	I0108 23:18:12.867420  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:12.868180  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:13.363855  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:13.363885  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:13.363893  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:13.363899  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:13.366810  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:13.366837  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:13.366851  420066 round_trippers.go:580]     Audit-Id: a2e4146b-54e0-49c6-9671-4ca156fecfc4
	I0108 23:18:13.366860  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:13.366869  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:13.366878  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:13.366887  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:13.366897  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:13 GMT
	I0108 23:18:13.367166  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:13.864125  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:13.864164  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:13.864176  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:13.864186  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:13.869810  420066 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 23:18:13.869833  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:13.869843  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:13.869851  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:13.869859  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:13 GMT
	I0108 23:18:13.869866  420066 round_trippers.go:580]     Audit-Id: fdb180c7-8fda-4785-b047-6d2fcb1a566c
	I0108 23:18:13.869874  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:13.869883  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:13.870116  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:13.870481  420066 node_ready.go:58] node "multinode-266395" has status "Ready":"False"
	I0108 23:18:14.363798  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:14.363827  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:14.363835  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:14.363845  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:14.366358  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:14.366380  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:14.366387  420066 round_trippers.go:580]     Audit-Id: 2551182c-2b94-44a6-a688-c4cb9727e0fb
	I0108 23:18:14.366392  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:14.366397  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:14.366403  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:14.366411  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:14.366419  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:14 GMT
	I0108 23:18:14.366717  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:14.864495  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:14.864526  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:14.864535  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:14.864540  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:14.867259  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:14.867283  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:14.867290  420066 round_trippers.go:580]     Audit-Id: b25e19e8-df59-40ac-8d8d-48de168a7375
	I0108 23:18:14.867296  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:14.867300  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:14.867305  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:14.867310  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:14.867315  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:14 GMT
	I0108 23:18:14.867797  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:15.364561  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:15.364606  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:15.364618  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:15.364628  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:15.367793  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:15.367820  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:15.367831  420066 round_trippers.go:580]     Audit-Id: 38696ecd-19ec-40f5-87e8-b0ae8871972d
	I0108 23:18:15.367840  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:15.367849  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:15.367858  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:15.367865  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:15.367873  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:15 GMT
	I0108 23:18:15.367987  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:15.864311  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:15.864338  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:15.864346  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:15.864352  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:15.867223  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:15.867247  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:15.867255  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:15.867263  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:15.867269  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:15.867277  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:15.867289  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:15 GMT
	I0108 23:18:15.867300  420066 round_trippers.go:580]     Audit-Id: 8441e82e-f824-45f2-b5e3-7f3346ec6182
	I0108 23:18:15.867636  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:16.364358  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:16.364386  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:16.364394  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:16.364400  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:16.366974  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:16.366995  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:16.367002  420066 round_trippers.go:580]     Audit-Id: b6525147-92db-4cf0-8e16-458b5870df0a
	I0108 23:18:16.367008  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:16.367013  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:16.367021  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:16.367029  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:16.367042  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:16 GMT
	I0108 23:18:16.367276  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:16.367653  420066 node_ready.go:58] node "multinode-266395" has status "Ready":"False"
	I0108 23:18:16.863938  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:16.863961  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:16.863973  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:16.863979  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:16.867312  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:16.867337  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:16.867347  420066 round_trippers.go:580]     Audit-Id: 72153b80-d44a-408f-abd8-f3b29dc980dd
	I0108 23:18:16.867366  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:16.867375  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:16.867384  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:16.867393  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:16.867406  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:16 GMT
	I0108 23:18:16.867655  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:17.364385  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:17.364413  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:17.364422  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:17.364428  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:17.367058  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:17.367079  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:17.367092  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:17.367098  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:17 GMT
	I0108 23:18:17.367103  420066 round_trippers.go:580]     Audit-Id: f07c3125-3e42-4da3-af9a-9ae3a75cbc7a
	I0108 23:18:17.367108  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:17.367113  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:17.367118  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:17.367524  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"319","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0108 23:18:17.863769  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:17.863796  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:17.863807  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:17.863815  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:17.869050  420066 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 23:18:17.869072  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:17.869080  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:17.869086  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:17 GMT
	I0108 23:18:17.869091  420066 round_trippers.go:580]     Audit-Id: e9a15c0e-7f99-435d-85cf-ae5a2dc5b8bb
	I0108 23:18:17.869096  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:17.869101  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:17.869106  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:17.869303  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:17.869637  420066 node_ready.go:49] node "multinode-266395" has status "Ready":"True"
	I0108 23:18:17.869658  420066 node_ready.go:38] duration metric: took 6.006144061s waiting for node "multinode-266395" to be "Ready" ...
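The wait that just finished is a simple poll of the node object until its NodeReady condition reports True. A minimal sketch of such a loop follows (illustrative only; it assumes the same clientset as above plus the k8s.io/api/core/v1, time and fmt imports, and the 500ms interval matches the roughly half-second polling visible in the log):

// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

The pod_ready wait that starts next follows the same pattern, checking each system-critical pod's Ready condition instead of the node's.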
	I0108 23:18:17.869672  420066 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:18:17.869778  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:18:17.869789  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:17.869800  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:17.869810  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:17.873512  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:17.873536  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:17.873549  420066 round_trippers.go:580]     Audit-Id: e29e5235-7213-429e-976f-9106c736346c
	I0108 23:18:17.873556  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:17.873564  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:17.873571  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:17.873578  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:17.873590  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:17 GMT
	I0108 23:18:17.874838  420066 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54778 chars]
	I0108 23:18:17.877993  420066 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:17.878067  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:17.878082  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:17.878093  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:17.878102  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:17.880142  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:17.880157  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:17.880163  420066 round_trippers.go:580]     Audit-Id: a87e597c-a874-4518-b887-3c8ccd25ad61
	I0108 23:18:17.880168  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:17.880176  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:17.880181  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:17.880189  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:17.880194  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:17 GMT
	I0108 23:18:17.880565  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 23:18:17.880958  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:17.880971  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:17.880978  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:17.880984  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:17.883048  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:17.883068  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:17.883082  420066 round_trippers.go:580]     Audit-Id: bfd5e62d-77d3-4011-b8a4-82d1b6d9c5fb
	I0108 23:18:17.883091  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:17.883099  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:17.883107  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:17.883115  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:17.883124  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:17 GMT
	I0108 23:18:17.883641  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:18.378621  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:18.378656  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:18.378668  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:18.378678  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:18.382072  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:18.382099  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:18.382109  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:18.382117  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:18.382125  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:18 GMT
	I0108 23:18:18.382133  420066 round_trippers.go:580]     Audit-Id: 023c9d99-9f1a-47ab-b555-17cc0c358bcf
	I0108 23:18:18.382140  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:18.382148  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:18.382366  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 23:18:18.382998  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:18.383021  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:18.383032  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:18.383041  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:18.385912  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:18.385933  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:18.385943  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:18.385955  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:18 GMT
	I0108 23:18:18.385963  420066 round_trippers.go:580]     Audit-Id: 95f7ea98-c0a8-4a18-8f92-0677133b9630
	I0108 23:18:18.385971  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:18.385980  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:18.385989  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:18.386386  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:18.879027  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:18.879060  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:18.879073  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:18.879083  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:18.913764  420066 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0108 23:18:18.913799  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:18.913811  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:18.913819  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:18.913827  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:18.913835  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:18.913843  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:18 GMT
	I0108 23:18:18.913851  420066 round_trippers.go:580]     Audit-Id: 11834300-d6ba-46e1-b418-beadfd45c969
	I0108 23:18:18.914085  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 23:18:18.914699  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:18.914720  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:18.914732  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:18.914741  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:18.949505  420066 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0108 23:18:18.949548  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:18.949559  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:18.949567  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:18 GMT
	I0108 23:18:18.949574  420066 round_trippers.go:580]     Audit-Id: 0bc69d46-e6bf-4f02-87a6-508114b5ef1f
	I0108 23:18:18.949581  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:18.949589  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:18.949597  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:18.950130  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:19.378784  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:19.378813  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:19.378822  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:19.378828  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:19.382520  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:19.382547  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:19.382557  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:19 GMT
	I0108 23:18:19.382564  420066 round_trippers.go:580]     Audit-Id: 4de6e654-37d0-43cd-8f9d-cb79bdee4bac
	I0108 23:18:19.382571  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:19.382580  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:19.382587  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:19.382603  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:19.382861  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 23:18:19.383430  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:19.383447  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:19.383454  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:19.383460  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:19.386206  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:19.386228  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:19.386239  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:19.386247  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:19 GMT
	I0108 23:18:19.386255  420066 round_trippers.go:580]     Audit-Id: 4614aa60-79ed-42ad-963f-db2a76e02381
	I0108 23:18:19.386263  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:19.386270  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:19.386284  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:19.386469  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:19.879251  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:19.879282  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:19.879291  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:19.879297  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:19.884125  420066 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:18:19.884154  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:19.884166  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:19.884175  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:19.884183  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:19.884189  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:19 GMT
	I0108 23:18:19.884194  420066 round_trippers.go:580]     Audit-Id: d8b6a36c-ad9d-48db-915c-d564adf3b5b6
	I0108 23:18:19.884199  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:19.884401  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"394","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0108 23:18:19.884888  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:19.884901  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:19.884909  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:19.884916  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:19.891808  420066 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 23:18:19.891823  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:19.891832  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:19.891841  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:19.891850  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:19 GMT
	I0108 23:18:19.891859  420066 round_trippers.go:580]     Audit-Id: 044b99d8-bd46-46c3-bfc0-a8c5e2e7acc3
	I0108 23:18:19.891867  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:19.891872  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:19.892759  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:19.893120  420066 pod_ready.go:102] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"False"
	I0108 23:18:20.378452  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:18:20.378477  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.378486  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.378492  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.381547  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:20.381575  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.381587  420066 round_trippers.go:580]     Audit-Id: 0796e16e-6fa2-4b41-bcb2-64c8bb65bfdf
	I0108 23:18:20.381598  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.381604  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.381609  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.381614  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.381619  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.382104  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"414","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0108 23:18:20.382661  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.382683  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.382694  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.382704  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.384998  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:20.385020  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.385026  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.385032  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.385037  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.385042  420066 round_trippers.go:580]     Audit-Id: 72772194-3e1b-4f93-b4ad-1a686e1096d6
	I0108 23:18:20.385047  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.385051  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.385726  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.386013  420066 pod_ready.go:92] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.386028  420066 pod_ready.go:81] duration metric: took 2.508011532s waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.386037  420066 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.386091  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:18:20.386098  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.386105  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.386111  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.388228  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:20.388247  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.388253  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.388258  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.388263  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.388268  420066 round_trippers.go:580]     Audit-Id: 88e4b701-c254-430e-890b-389fe3e0d584
	I0108 23:18:20.388273  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.388281  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.388489  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"400","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0108 23:18:20.388839  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.388849  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.388855  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.388861  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.390805  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.390821  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.390827  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.390833  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.390838  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.390843  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.390848  420066 round_trippers.go:580]     Audit-Id: 9a933d0c-d716-4903-986a-276ae9900a0f
	I0108 23:18:20.390854  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.391003  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.391264  420066 pod_ready.go:92] pod "etcd-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.391277  420066 pod_ready.go:81] duration metric: took 5.234394ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.391288  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.391341  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:18:20.391348  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.391355  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.391378  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.393644  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:20.393660  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.393666  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.393673  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.393678  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.393684  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.393690  420066 round_trippers.go:580]     Audit-Id: 5372b0da-bdd2-47d9-85ed-06154532c84a
	I0108 23:18:20.393695  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.393847  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"401","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0108 23:18:20.394182  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.394193  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.394200  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.394206  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.395932  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.395948  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.395954  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.395959  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.395964  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.395969  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.395974  420066 round_trippers.go:580]     Audit-Id: 72889379-aaed-46d1-b426-bc690861c1d2
	I0108 23:18:20.395981  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.396250  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.396493  420066 pod_ready.go:92] pod "kube-apiserver-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.396505  420066 pod_ready.go:81] duration metric: took 5.211399ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.396513  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.396556  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:18:20.396563  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.396570  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.396575  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.398312  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.398328  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.398334  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.398339  420066 round_trippers.go:580]     Audit-Id: d61d5217-20ad-42c6-bec3-bb57c5e67a16
	I0108 23:18:20.398344  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.398349  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.398355  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.398363  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.398508  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"403","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0108 23:18:20.398817  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.398829  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.398836  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.398842  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.400853  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.400872  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.400881  420066 round_trippers.go:580]     Audit-Id: 1803f1ab-510e-4ff3-bbb4-d3e7da024c69
	I0108 23:18:20.400888  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.400896  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.400909  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.400918  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.400923  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.401355  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.401598  420066 pod_ready.go:92] pod "kube-controller-manager-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.401611  420066 pod_ready.go:81] duration metric: took 5.091862ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.401619  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.401656  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:18:20.401663  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.401670  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.401675  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.403391  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.403409  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.403417  420066 round_trippers.go:580]     Audit-Id: 40020180-6acb-4684-a5fa-55e517afb3a2
	I0108 23:18:20.403422  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.403428  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.403433  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.403441  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.403446  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.403701  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"379","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0108 23:18:20.404023  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.404034  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.404040  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.404046  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.405842  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.405859  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.405866  420066 round_trippers.go:580]     Audit-Id: 7577857d-fe26-492f-a1a3-fc77f1375840
	I0108 23:18:20.405871  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.405876  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.405881  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.405893  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.405901  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.406160  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.406399  420066 pod_ready.go:92] pod "kube-proxy-lvmgf" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.406410  420066 pod_ready.go:81] duration metric: took 4.787098ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.406417  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.578882  420066 request.go:629] Waited for 172.385498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:18:20.578968  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:18:20.578973  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.578980  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.578986  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.581716  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:20.581740  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.581747  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.581755  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.581763  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.581771  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.581777  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.581783  420066 round_trippers.go:580]     Audit-Id: 67e744e4-4e2b-4990-8f5b-864213251120
	I0108 23:18:20.581979  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"402","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0108 23:18:20.779126  420066 request.go:629] Waited for 196.729776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.779202  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:18:20.779208  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.779220  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.779229  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.784368  420066 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 23:18:20.784392  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.784399  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.784404  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.784411  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.784416  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.784428  420066 round_trippers.go:580]     Audit-Id: 4d1f1734-e066-4ea7-b26a-dd9b835be67d
	I0108 23:18:20.784439  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.784676  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:18:20.785124  420066 pod_ready.go:92] pod "kube-scheduler-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:18:20.785153  420066 pod_ready.go:81] duration metric: took 378.729707ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:18:20.785164  420066 pod_ready.go:38] duration metric: took 2.915452441s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
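(Editorial note: the pod_ready loop above repeatedly GETs each control-plane pod and its node until the pod reports the Ready condition. Below is a minimal client-go sketch of the same check, not minikube's own code; it assumes a reachable kubeconfig at ~/.kube/config, and the pod and namespace names are taken from the log purely for illustration.)

// Sketch (assumption): poll a pod until its Ready condition is True, similar in spirit to pod_ready.go.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod and namespace below come from the log above; adjust for your cluster.
	name, ns := "coredns-5dd5756b68-r8pvw", "kube-system"
	for {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
		}
		// Not Ready yet: wait briefly and poll again, like the ~500ms cadence visible in the log.
		time.Sleep(500 * time.Millisecond)
	}
}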
	I0108 23:18:20.785185  420066 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:18:20.785245  420066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:18:20.802263  420066 command_runner.go:130] > 1069
	I0108 23:18:20.802298  420066 api_server.go:72] duration metric: took 9.536383614s to wait for apiserver process to appear ...
	I0108 23:18:20.802306  420066 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:18:20.802336  420066 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:18:20.807219  420066 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0108 23:18:20.807304  420066 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0108 23:18:20.807312  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.807320  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.807328  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.808645  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:18:20.808660  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.808668  420066 round_trippers.go:580]     Content-Length: 264
	I0108 23:18:20.808673  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.808681  420066 round_trippers.go:580]     Audit-Id: 0bbd49f8-b37d-4d07-a5eb-5eae9eb19ecb
	I0108 23:18:20.808686  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.808693  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.808698  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.808706  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.808723  420066 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 23:18:20.808801  420066 api_server.go:141] control plane version: v1.28.4
	I0108 23:18:20.808817  420066 api_server.go:131] duration metric: took 6.499106ms to wait for apiserver health ...
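(Editorial note: the healthz and version probes above are plain HTTPS GETs against the apiserver. The sketch below is a rough equivalent, not minikube's implementation; the endpoint 192.168.39.18:8443 is reused from the log, and certificate verification is skipped purely for illustration, whereas minikube authenticates with the cluster's client certificates.)

// Sketch (assumption): probe the apiserver /healthz and /version endpoints as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify is only for this sketch; real clients present the cluster CA and client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.18:8443" + path)
		if err != nil {
			fmt.Println(path, "failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", path, resp.StatusCode, body)
	}
}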
	I0108 23:18:20.808825  420066 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:18:20.979166  420066 request.go:629] Waited for 170.27034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:18:20.979248  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:18:20.979255  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:20.979264  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:20.979273  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:20.982355  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:20.982374  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:20.982381  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:20 GMT
	I0108 23:18:20.982386  420066 round_trippers.go:580]     Audit-Id: 6c7b99e5-d43f-45f8-86b3-caf8105269f2
	I0108 23:18:20.982391  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:20.982451  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:20.982465  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:20.982470  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:20.983689  420066 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"414","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0108 23:18:20.985333  420066 system_pods.go:59] 8 kube-system pods found
	I0108 23:18:20.985369  420066 system_pods.go:61] "coredns-5dd5756b68-r8pvw" [5300c187-4f1f-4330-ae19-6bf2855763f2] Running
	I0108 23:18:20.985375  420066 system_pods.go:61] "etcd-multinode-266395" [ad57572e-a901-4042-b907-d0738c803dbd] Running
	I0108 23:18:20.985379  420066 system_pods.go:61] "kindnet-mnltq" [c65752e0-cd30-49cf-9645-5befeecc3d34] Running
	I0108 23:18:20.985383  420066 system_pods.go:61] "kube-apiserver-multinode-266395" [70b0f39e-3999-4a5b-bae6-c08ae2adeb49] Running
	I0108 23:18:20.985388  420066 system_pods.go:61] "kube-controller-manager-multinode-266395" [32b7c02b-f69c-46ac-ab67-d61a4077b5b2] Running
	I0108 23:18:20.985391  420066 system_pods.go:61] "kube-proxy-lvmgf" [9c37677d-6832-4d6b-8f29-c23d25347535] Running
	I0108 23:18:20.985395  420066 system_pods.go:61] "kube-scheduler-multinode-266395" [df5e2822-435f-4264-854b-929b6acccd99] Running
	I0108 23:18:20.985398  420066 system_pods.go:61] "storage-provisioner" [f15dcd0d-59b5-4f16-94c7-425f162c60ad] Running
	I0108 23:18:20.985404  420066 system_pods.go:74] duration metric: took 176.571665ms to wait for pod list to return data ...
	I0108 23:18:20.985411  420066 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:18:21.178888  420066 request.go:629] Waited for 193.380997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:18:21.178980  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:18:21.178989  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:21.179001  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:21.179009  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:21.181697  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:21.181717  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:21.181725  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:21.181730  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:21.181736  420066 round_trippers.go:580]     Content-Length: 261
	I0108 23:18:21.181741  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:21 GMT
	I0108 23:18:21.181749  420066 round_trippers.go:580]     Audit-Id: 56bd1d57-c469-4252-a9c2-ce74f4bf6a5d
	I0108 23:18:21.181754  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:21.181759  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:21.181782  420066 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fef9e48e-c368-4659-9859-1571562fbbc8","resourceVersion":"295","creationTimestamp":"2024-01-08T23:18:10Z"}}]}
	I0108 23:18:21.181965  420066 default_sa.go:45] found service account: "default"
	I0108 23:18:21.181982  420066 default_sa.go:55] duration metric: took 196.565612ms for default service account to be created ...
	I0108 23:18:21.181996  420066 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:18:21.379449  420066 request.go:629] Waited for 197.379393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:18:21.379528  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:18:21.379533  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:21.379541  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:21.379548  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:21.383068  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:21.383090  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:21.383097  420066 round_trippers.go:580]     Audit-Id: fdfdd26d-a423-48f1-a383-248071a877b2
	I0108 23:18:21.383103  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:21.383107  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:21.383113  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:21.383118  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:21.383125  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:21 GMT
	I0108 23:18:21.384441  420066 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"414","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0108 23:18:21.386068  420066 system_pods.go:86] 8 kube-system pods found
	I0108 23:18:21.386087  420066 system_pods.go:89] "coredns-5dd5756b68-r8pvw" [5300c187-4f1f-4330-ae19-6bf2855763f2] Running
	I0108 23:18:21.386092  420066 system_pods.go:89] "etcd-multinode-266395" [ad57572e-a901-4042-b907-d0738c803dbd] Running
	I0108 23:18:21.386096  420066 system_pods.go:89] "kindnet-mnltq" [c65752e0-cd30-49cf-9645-5befeecc3d34] Running
	I0108 23:18:21.386100  420066 system_pods.go:89] "kube-apiserver-multinode-266395" [70b0f39e-3999-4a5b-bae6-c08ae2adeb49] Running
	I0108 23:18:21.386104  420066 system_pods.go:89] "kube-controller-manager-multinode-266395" [32b7c02b-f69c-46ac-ab67-d61a4077b5b2] Running
	I0108 23:18:21.386108  420066 system_pods.go:89] "kube-proxy-lvmgf" [9c37677d-6832-4d6b-8f29-c23d25347535] Running
	I0108 23:18:21.386112  420066 system_pods.go:89] "kube-scheduler-multinode-266395" [df5e2822-435f-4264-854b-929b6acccd99] Running
	I0108 23:18:21.386116  420066 system_pods.go:89] "storage-provisioner" [f15dcd0d-59b5-4f16-94c7-425f162c60ad] Running
	I0108 23:18:21.386122  420066 system_pods.go:126] duration metric: took 204.120023ms to wait for k8s-apps to be running ...
	I0108 23:18:21.386131  420066 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:18:21.386179  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:21.402636  420066 system_svc.go:56] duration metric: took 16.496011ms for WaitForService to wait for kubelet.
	I0108 23:18:21.402673  420066 kubeadm.go:581] duration metric: took 10.136758585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:18:21.402693  420066 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:18:21.579184  420066 request.go:629] Waited for 176.364055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0108 23:18:21.579268  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:18:21.579275  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:21.579284  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:21.579291  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:21.582198  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:21.582228  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:21.582240  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:21.582254  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:21.582263  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:21.582272  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:21.582280  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:21 GMT
	I0108 23:18:21.582289  420066 round_trippers.go:580]     Audit-Id: 0f64fd6a-efa0-4392-ad8f-246911b921db
	I0108 23:18:21.582525  420066 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0108 23:18:21.583110  420066 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:18:21.583154  420066 node_conditions.go:123] node cpu capacity is 2
	I0108 23:18:21.583169  420066 node_conditions.go:105] duration metric: took 180.47125ms to run NodePressure ...
	I0108 23:18:21.583198  420066 start.go:228] waiting for startup goroutines ...
	I0108 23:18:21.583211  420066 start.go:233] waiting for cluster config update ...
	I0108 23:18:21.583226  420066 start.go:242] writing updated cluster config ...
	I0108 23:18:21.585769  420066 out.go:177] 
	I0108 23:18:21.587210  420066 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:18:21.587276  420066 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:18:21.589097  420066 out.go:177] * Starting worker node multinode-266395-m02 in cluster multinode-266395
	I0108 23:18:21.590464  420066 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:18:21.590483  420066 cache.go:56] Caching tarball of preloaded images
	I0108 23:18:21.590561  420066 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:18:21.590572  420066 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:18:21.590632  420066 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:18:21.590766  420066 start.go:365] acquiring machines lock for multinode-266395-m02: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:18:21.590802  420066 start.go:369] acquired machines lock for "multinode-266395-m02" in 18.835µs
	I0108 23:18:21.590819  420066 start.go:93] Provisioning new machine with config: &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:18:21.590879  420066 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0108 23:18:21.592507  420066 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 23:18:21.592585  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:21.592614  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:21.607138  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0108 23:18:21.607664  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:21.608176  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:21.608199  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:21.608520  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:21.608712  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:18:21.608863  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:21.609003  420066 start.go:159] libmachine.API.Create for "multinode-266395" (driver="kvm2")
	I0108 23:18:21.609046  420066 client.go:168] LocalClient.Create starting
	I0108 23:18:21.609071  420066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0108 23:18:21.609100  420066 main.go:141] libmachine: Decoding PEM data...
	I0108 23:18:21.609115  420066 main.go:141] libmachine: Parsing certificate...
	I0108 23:18:21.609169  420066 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0108 23:18:21.609205  420066 main.go:141] libmachine: Decoding PEM data...
	I0108 23:18:21.609214  420066 main.go:141] libmachine: Parsing certificate...
	I0108 23:18:21.609231  420066 main.go:141] libmachine: Running pre-create checks...
	I0108 23:18:21.609239  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .PreCreateCheck
	I0108 23:18:21.609419  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetConfigRaw
	I0108 23:18:21.609761  420066 main.go:141] libmachine: Creating machine...
	I0108 23:18:21.609776  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .Create
	I0108 23:18:21.609902  420066 main.go:141] libmachine: (multinode-266395-m02) Creating KVM machine...
	I0108 23:18:21.611194  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found existing default KVM network
	I0108 23:18:21.611318  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found existing private KVM network mk-multinode-266395
	I0108 23:18:21.611466  420066 main.go:141] libmachine: (multinode-266395-m02) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02 ...
	I0108 23:18:21.611495  420066 main.go:141] libmachine: (multinode-266395-m02) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 23:18:21.611563  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:21.611447  420885 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:18:21.611669  420066 main.go:141] libmachine: (multinode-266395-m02) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 23:18:21.864459  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:21.864299  420885 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa...
	I0108 23:18:21.945857  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:21.945705  420885 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/multinode-266395-m02.rawdisk...
	I0108 23:18:21.945899  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Writing magic tar header
	I0108 23:18:21.945926  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Writing SSH key tar header
	I0108 23:18:21.945946  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:21.945835  420885 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02 ...
	I0108 23:18:21.945963  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02
	I0108 23:18:21.945999  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02 (perms=drwx------)
	I0108 23:18:21.946026  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0108 23:18:21.946040  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0108 23:18:21.946053  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:18:21.946064  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0108 23:18:21.946072  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0108 23:18:21.946085  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 23:18:21.946095  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home/jenkins
	I0108 23:18:21.946111  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0108 23:18:21.946126  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Checking permissions on dir: /home
	I0108 23:18:21.946138  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Skipping /home - not owner
	I0108 23:18:21.946149  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 23:18:21.946159  420066 main.go:141] libmachine: (multinode-266395-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 23:18:21.946174  420066 main.go:141] libmachine: (multinode-266395-m02) Creating domain...
	I0108 23:18:21.947143  420066 main.go:141] libmachine: (multinode-266395-m02) define libvirt domain using xml: 
	I0108 23:18:21.947178  420066 main.go:141] libmachine: (multinode-266395-m02) <domain type='kvm'>
	I0108 23:18:21.947189  420066 main.go:141] libmachine: (multinode-266395-m02)   <name>multinode-266395-m02</name>
	I0108 23:18:21.947202  420066 main.go:141] libmachine: (multinode-266395-m02)   <memory unit='MiB'>2200</memory>
	I0108 23:18:21.947215  420066 main.go:141] libmachine: (multinode-266395-m02)   <vcpu>2</vcpu>
	I0108 23:18:21.947224  420066 main.go:141] libmachine: (multinode-266395-m02)   <features>
	I0108 23:18:21.947238  420066 main.go:141] libmachine: (multinode-266395-m02)     <acpi/>
	I0108 23:18:21.947249  420066 main.go:141] libmachine: (multinode-266395-m02)     <apic/>
	I0108 23:18:21.947260  420066 main.go:141] libmachine: (multinode-266395-m02)     <pae/>
	I0108 23:18:21.947273  420066 main.go:141] libmachine: (multinode-266395-m02)     
	I0108 23:18:21.947305  420066 main.go:141] libmachine: (multinode-266395-m02)   </features>
	I0108 23:18:21.947325  420066 main.go:141] libmachine: (multinode-266395-m02)   <cpu mode='host-passthrough'>
	I0108 23:18:21.947332  420066 main.go:141] libmachine: (multinode-266395-m02)   
	I0108 23:18:21.947342  420066 main.go:141] libmachine: (multinode-266395-m02)   </cpu>
	I0108 23:18:21.947351  420066 main.go:141] libmachine: (multinode-266395-m02)   <os>
	I0108 23:18:21.947385  420066 main.go:141] libmachine: (multinode-266395-m02)     <type>hvm</type>
	I0108 23:18:21.947396  420066 main.go:141] libmachine: (multinode-266395-m02)     <boot dev='cdrom'/>
	I0108 23:18:21.947402  420066 main.go:141] libmachine: (multinode-266395-m02)     <boot dev='hd'/>
	I0108 23:18:21.947410  420066 main.go:141] libmachine: (multinode-266395-m02)     <bootmenu enable='no'/>
	I0108 23:18:21.947415  420066 main.go:141] libmachine: (multinode-266395-m02)   </os>
	I0108 23:18:21.947424  420066 main.go:141] libmachine: (multinode-266395-m02)   <devices>
	I0108 23:18:21.947430  420066 main.go:141] libmachine: (multinode-266395-m02)     <disk type='file' device='cdrom'>
	I0108 23:18:21.947445  420066 main.go:141] libmachine: (multinode-266395-m02)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/boot2docker.iso'/>
	I0108 23:18:21.947457  420066 main.go:141] libmachine: (multinode-266395-m02)       <target dev='hdc' bus='scsi'/>
	I0108 23:18:21.947466  420066 main.go:141] libmachine: (multinode-266395-m02)       <readonly/>
	I0108 23:18:21.947473  420066 main.go:141] libmachine: (multinode-266395-m02)     </disk>
	I0108 23:18:21.947481  420066 main.go:141] libmachine: (multinode-266395-m02)     <disk type='file' device='disk'>
	I0108 23:18:21.947490  420066 main.go:141] libmachine: (multinode-266395-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 23:18:21.947502  420066 main.go:141] libmachine: (multinode-266395-m02)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/multinode-266395-m02.rawdisk'/>
	I0108 23:18:21.947510  420066 main.go:141] libmachine: (multinode-266395-m02)       <target dev='hda' bus='virtio'/>
	I0108 23:18:21.947519  420066 main.go:141] libmachine: (multinode-266395-m02)     </disk>
	I0108 23:18:21.947529  420066 main.go:141] libmachine: (multinode-266395-m02)     <interface type='network'>
	I0108 23:18:21.947564  420066 main.go:141] libmachine: (multinode-266395-m02)       <source network='mk-multinode-266395'/>
	I0108 23:18:21.947597  420066 main.go:141] libmachine: (multinode-266395-m02)       <model type='virtio'/>
	I0108 23:18:21.947612  420066 main.go:141] libmachine: (multinode-266395-m02)     </interface>
	I0108 23:18:21.947625  420066 main.go:141] libmachine: (multinode-266395-m02)     <interface type='network'>
	I0108 23:18:21.947640  420066 main.go:141] libmachine: (multinode-266395-m02)       <source network='default'/>
	I0108 23:18:21.947651  420066 main.go:141] libmachine: (multinode-266395-m02)       <model type='virtio'/>
	I0108 23:18:21.947669  420066 main.go:141] libmachine: (multinode-266395-m02)     </interface>
	I0108 23:18:21.947687  420066 main.go:141] libmachine: (multinode-266395-m02)     <serial type='pty'>
	I0108 23:18:21.947702  420066 main.go:141] libmachine: (multinode-266395-m02)       <target port='0'/>
	I0108 23:18:21.947714  420066 main.go:141] libmachine: (multinode-266395-m02)     </serial>
	I0108 23:18:21.947730  420066 main.go:141] libmachine: (multinode-266395-m02)     <console type='pty'>
	I0108 23:18:21.947742  420066 main.go:141] libmachine: (multinode-266395-m02)       <target type='serial' port='0'/>
	I0108 23:18:21.947759  420066 main.go:141] libmachine: (multinode-266395-m02)     </console>
	I0108 23:18:21.947774  420066 main.go:141] libmachine: (multinode-266395-m02)     <rng model='virtio'>
	I0108 23:18:21.947810  420066 main.go:141] libmachine: (multinode-266395-m02)       <backend model='random'>/dev/random</backend>
	I0108 23:18:21.947831  420066 main.go:141] libmachine: (multinode-266395-m02)     </rng>
	I0108 23:18:21.947846  420066 main.go:141] libmachine: (multinode-266395-m02)     
	I0108 23:18:21.947864  420066 main.go:141] libmachine: (multinode-266395-m02)     
	I0108 23:18:21.947879  420066 main.go:141] libmachine: (multinode-266395-m02)   </devices>
	I0108 23:18:21.947891  420066 main.go:141] libmachine: (multinode-266395-m02) </domain>
	I0108 23:18:21.947922  420066 main.go:141] libmachine: (multinode-266395-m02) 
	I0108 23:18:21.954898  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:6a:ca:2a in network default
	I0108 23:18:21.955572  420066 main.go:141] libmachine: (multinode-266395-m02) Ensuring networks are active...
	I0108 23:18:21.955602  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:21.956300  420066 main.go:141] libmachine: (multinode-266395-m02) Ensuring network default is active
	I0108 23:18:21.956789  420066 main.go:141] libmachine: (multinode-266395-m02) Ensuring network mk-multinode-266395 is active
	I0108 23:18:21.957225  420066 main.go:141] libmachine: (multinode-266395-m02) Getting domain xml...
	I0108 23:18:21.957993  420066 main.go:141] libmachine: (multinode-266395-m02) Creating domain...
	I0108 23:18:23.201063  420066 main.go:141] libmachine: (multinode-266395-m02) Waiting to get IP...
	I0108 23:18:23.202001  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:23.202464  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:23.202502  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:23.202434  420885 retry.go:31] will retry after 299.476945ms: waiting for machine to come up
	I0108 23:18:23.504075  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:23.504567  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:23.504603  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:23.504503  420885 retry.go:31] will retry after 311.682668ms: waiting for machine to come up
	I0108 23:18:23.818112  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:23.818607  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:23.818641  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:23.818554  420885 retry.go:31] will retry after 404.403287ms: waiting for machine to come up
	I0108 23:18:24.224067  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:24.224479  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:24.224516  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:24.224422  420885 retry.go:31] will retry after 418.605885ms: waiting for machine to come up
	I0108 23:18:24.645265  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:24.645800  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:24.645827  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:24.645758  420885 retry.go:31] will retry after 511.900951ms: waiting for machine to come up
	I0108 23:18:25.159507  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:25.159840  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:25.159877  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:25.159779  420885 retry.go:31] will retry after 870.177829ms: waiting for machine to come up
	I0108 23:18:26.031033  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:26.031427  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:26.031457  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:26.031353  420885 retry.go:31] will retry after 983.465859ms: waiting for machine to come up
	I0108 23:18:27.017154  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:27.017554  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:27.017587  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:27.017497  420885 retry.go:31] will retry after 1.259610782s: waiting for machine to come up
	I0108 23:18:28.278858  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:28.279303  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:28.279326  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:28.279280  420885 retry.go:31] will retry after 1.465333925s: waiting for machine to come up
	I0108 23:18:29.745858  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:29.746296  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:29.746329  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:29.746237  420885 retry.go:31] will retry after 1.740751994s: waiting for machine to come up
	I0108 23:18:31.488707  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:31.489147  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:31.489208  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:31.489113  420885 retry.go:31] will retry after 2.853458587s: waiting for machine to come up
	I0108 23:18:34.344870  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:34.345434  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:34.345467  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:34.345380  420885 retry.go:31] will retry after 3.382067335s: waiting for machine to come up
	I0108 23:18:37.728915  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:37.729394  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:37.729414  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:37.729316  420885 retry.go:31] will retry after 2.806196344s: waiting for machine to come up
	I0108 23:18:40.539277  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:40.539690  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find current IP address of domain multinode-266395-m02 in network mk-multinode-266395
	I0108 23:18:40.539715  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | I0108 23:18:40.539631  420885 retry.go:31] will retry after 5.289947391s: waiting for machine to come up
	I0108 23:18:45.834204  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:45.834739  420066 main.go:141] libmachine: (multinode-266395-m02) Found IP for machine: 192.168.39.214
	I0108 23:18:45.834777  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has current primary IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:45.834787  420066 main.go:141] libmachine: (multinode-266395-m02) Reserving static IP address...
	I0108 23:18:45.835068  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | unable to find host DHCP lease matching {name: "multinode-266395-m02", mac: "52:54:00:ec:9d:f1", ip: "192.168.39.214"} in network mk-multinode-266395
	I0108 23:18:45.909900  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Getting to WaitForSSH function...
	I0108 23:18:45.909941  420066 main.go:141] libmachine: (multinode-266395-m02) Reserved static IP address: 192.168.39.214
	I0108 23:18:45.909956  420066 main.go:141] libmachine: (multinode-266395-m02) Waiting for SSH to be available...
	I0108 23:18:45.912579  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:45.912958  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:45.913001  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:45.913077  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Using SSH client type: external
	I0108 23:18:45.913124  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa (-rw-------)
	I0108 23:18:45.913159  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:18:45.913174  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | About to run SSH command:
	I0108 23:18:45.913194  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | exit 0
	I0108 23:18:46.007003  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | SSH cmd err, output: <nil>: 
	I0108 23:18:46.007252  420066 main.go:141] libmachine: (multinode-266395-m02) KVM machine creation complete!
	I0108 23:18:46.007651  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetConfigRaw
	I0108 23:18:46.008232  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:46.008434  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:46.008644  420066 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 23:18:46.008669  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetState
	I0108 23:18:46.009928  420066 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 23:18:46.009962  420066 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 23:18:46.009969  420066 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 23:18:46.009975  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.012473  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.012859  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.012891  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.013021  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.013203  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.013360  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.013480  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.013623  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:46.014076  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:46.014095  420066 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 23:18:46.130736  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:18:46.130769  420066 main.go:141] libmachine: Detecting the provisioner...
	I0108 23:18:46.130782  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.133733  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.134066  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.134093  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.134269  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.134494  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.134692  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.134849  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.135031  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:46.135343  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:46.135373  420066 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 23:18:46.256135  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 23:18:46.256241  420066 main.go:141] libmachine: found compatible host: buildroot
	I0108 23:18:46.256252  420066 main.go:141] libmachine: Provisioning with buildroot...
	I0108 23:18:46.256261  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:18:46.256519  420066 buildroot.go:166] provisioning hostname "multinode-266395-m02"
	I0108 23:18:46.256545  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:18:46.256718  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.259464  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.259830  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.259853  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.260055  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.260280  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.260413  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.260573  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.260749  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:46.261129  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:46.261146  420066 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395-m02 && echo "multinode-266395-m02" | sudo tee /etc/hostname
	I0108 23:18:46.400053  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266395-m02
	
	I0108 23:18:46.400088  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.402801  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.403165  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.403186  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.403433  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.403640  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.403808  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.403979  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.404192  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:46.404502  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:46.404519  420066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:18:46.531911  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:18:46.531952  420066 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:18:46.531974  420066 buildroot.go:174] setting up certificates
	I0108 23:18:46.531988  420066 provision.go:83] configureAuth start
	I0108 23:18:46.531999  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:18:46.532347  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:18:46.536314  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.536729  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.536760  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.536901  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.539249  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.539610  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.539635  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.539809  420066 provision.go:138] copyHostCerts
	I0108 23:18:46.539844  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:18:46.539873  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:18:46.539882  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:18:46.539943  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:18:46.540023  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:18:46.540040  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:18:46.540046  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:18:46.540068  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:18:46.540117  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:18:46.540132  420066 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:18:46.540139  420066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:18:46.540158  420066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:18:46.540206  420066 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.multinode-266395-m02 san=[192.168.39.214 192.168.39.214 localhost 127.0.0.1 minikube multinode-266395-m02]
	I0108 23:18:46.777110  420066 provision.go:172] copyRemoteCerts
	I0108 23:18:46.777206  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:18:46.777242  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.780290  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.780770  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.780811  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.781014  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.781277  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.781458  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.781620  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:18:46.873577  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:18:46.873664  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:18:46.897357  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:18:46.897438  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:18:46.920313  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:18:46.920464  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 23:18:46.944230  420066 provision.go:86] duration metric: configureAuth took 412.227539ms
	I0108 23:18:46.944261  420066 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:18:46.944506  420066 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:18:46.944605  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:46.947034  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.947415  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:46.947439  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:46.947660  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:46.947897  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.948110  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:46.948266  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:46.948436  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:46.948898  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:46.948924  420066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:18:47.265888  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:18:47.265921  420066 main.go:141] libmachine: Checking connection to Docker...
	I0108 23:18:47.265935  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetURL
	I0108 23:18:47.267203  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | Using libvirt version 6000000
	I0108 23:18:47.269521  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.269895  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.269924  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.270105  420066 main.go:141] libmachine: Docker is up and running!
	I0108 23:18:47.270127  420066 main.go:141] libmachine: Reticulating splines...
	I0108 23:18:47.270136  420066 client.go:171] LocalClient.Create took 25.66108204s
	I0108 23:18:47.270164  420066 start.go:167] duration metric: libmachine.API.Create for "multinode-266395" took 25.661160855s
	I0108 23:18:47.270177  420066 start.go:300] post-start starting for "multinode-266395-m02" (driver="kvm2")
	I0108 23:18:47.270195  420066 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:18:47.270220  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:47.270504  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:18:47.270528  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:47.272924  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.273258  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.273285  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.273471  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:47.273668  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:47.273851  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:47.273999  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:18:47.366760  420066 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:18:47.371402  420066 command_runner.go:130] > NAME=Buildroot
	I0108 23:18:47.371430  420066 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 23:18:47.371437  420066 command_runner.go:130] > ID=buildroot
	I0108 23:18:47.371444  420066 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 23:18:47.371450  420066 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 23:18:47.371553  420066 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:18:47.371585  420066 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:18:47.371656  420066 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:18:47.371738  420066 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:18:47.371750  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:18:47.371865  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:18:47.381935  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:18:47.403281  420066 start.go:303] post-start completed in 133.084657ms
	I0108 23:18:47.403330  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetConfigRaw
	I0108 23:18:47.403919  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:18:47.406610  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.406956  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.406986  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.407285  420066 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:18:47.407534  420066 start.go:128] duration metric: createHost completed in 25.816643348s
	I0108 23:18:47.407596  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:47.409959  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.410417  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.410464  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.410552  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:47.410770  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:47.410978  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:47.411109  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:47.411259  420066 main.go:141] libmachine: Using SSH client type: native
	I0108 23:18:47.411593  420066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:18:47.411606  420066 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:18:47.532444  420066 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704755927.515952091
	
	I0108 23:18:47.532466  420066 fix.go:206] guest clock: 1704755927.515952091
	I0108 23:18:47.532475  420066 fix.go:219] Guest: 2024-01-08 23:18:47.515952091 +0000 UTC Remote: 2024-01-08 23:18:47.40754991 +0000 UTC m=+93.762947154 (delta=108.402181ms)
	I0108 23:18:47.532503  420066 fix.go:190] guest clock delta is within tolerance: 108.402181ms
	I0108 23:18:47.532507  420066 start.go:83] releasing machines lock for "multinode-266395-m02", held for 25.94169677s
	I0108 23:18:47.532539  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:47.532862  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:18:47.535652  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.535939  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.535970  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.538583  420066 out.go:177] * Found network options:
	I0108 23:18:47.540229  420066 out.go:177]   - NO_PROXY=192.168.39.18
	W0108 23:18:47.541735  420066 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:18:47.541776  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:47.542363  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:47.542564  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:18:47.542643  420066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:18:47.542679  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	W0108 23:18:47.542741  420066 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:18:47.542852  420066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:18:47.542894  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:18:47.545524  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.545825  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.545911  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.545956  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.546090  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:47.546219  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:47.546248  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:47.546270  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:47.546389  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:18:47.546465  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:47.546556  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:18:47.546617  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:18:47.546657  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:18:47.546757  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:18:47.790222  420066 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:18:47.790308  420066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:18:47.796477  420066 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 23:18:47.796613  420066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:18:47.796686  420066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:18:47.811865  420066 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 23:18:47.811893  420066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:18:47.811901  420066 start.go:475] detecting cgroup driver to use...
	I0108 23:18:47.811963  420066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:18:47.828892  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:18:47.840947  420066 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:18:47.841001  420066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:18:47.853019  420066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:18:47.865556  420066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:18:47.878423  420066 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 23:18:47.974093  420066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:18:48.093770  420066 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 23:18:48.093927  420066 docker.go:219] disabling docker service ...
	I0108 23:18:48.094001  420066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:18:48.107532  420066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:18:48.119490  420066 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 23:18:48.120046  420066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:18:48.133502  420066 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 23:18:48.252015  420066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:18:48.388695  420066 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 23:18:48.388733  420066 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 23:18:48.388898  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:18:48.400997  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:18:48.417464  420066 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:18:48.417752  420066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:18:48.417821  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:18:48.426652  420066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:18:48.426719  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:18:48.435438  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:18:48.444043  420066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:18:48.452691  420066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:18:48.461861  420066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:18:48.469489  420066 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:18:48.469528  420066 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:18:48.469574  420066 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 23:18:48.481501  420066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:18:48.490326  420066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:18:48.599268  420066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 23:18:48.766197  420066 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:18:48.766354  420066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:18:48.771251  420066 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:18:48.771279  420066 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:18:48.771289  420066 command_runner.go:130] > Device: 16h/22d	Inode: 698         Links: 1
	I0108 23:18:48.771300  420066 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:18:48.771308  420066 command_runner.go:130] > Access: 2024-01-08 23:18:48.735792238 +0000
	I0108 23:18:48.771317  420066 command_runner.go:130] > Modify: 2024-01-08 23:18:48.735792238 +0000
	I0108 23:18:48.771326  420066 command_runner.go:130] > Change: 2024-01-08 23:18:48.735792238 +0000
	I0108 23:18:48.771333  420066 command_runner.go:130] >  Birth: -
	I0108 23:18:48.771350  420066 start.go:543] Will wait 60s for crictl version
	I0108 23:18:48.771412  420066 ssh_runner.go:195] Run: which crictl
	I0108 23:18:48.774938  420066 command_runner.go:130] > /usr/bin/crictl
	I0108 23:18:48.775176  420066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:18:48.817184  420066 command_runner.go:130] > Version:  0.1.0
	I0108 23:18:48.817214  420066 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:18:48.817229  420066 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 23:18:48.817238  420066 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:18:48.817369  420066 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:18:48.817460  420066 ssh_runner.go:195] Run: crio --version
	I0108 23:18:48.860239  420066 command_runner.go:130] > crio version 1.24.1
	I0108 23:18:48.860264  420066 command_runner.go:130] > Version:          1.24.1
	I0108 23:18:48.860271  420066 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:18:48.860275  420066 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:18:48.860286  420066 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:18:48.860291  420066 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:18:48.860295  420066 command_runner.go:130] > Compiler:         gc
	I0108 23:18:48.860299  420066 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:18:48.860305  420066 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:18:48.860313  420066 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:18:48.860317  420066 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:18:48.860321  420066 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:18:48.861807  420066 ssh_runner.go:195] Run: crio --version
	I0108 23:18:48.903148  420066 command_runner.go:130] > crio version 1.24.1
	I0108 23:18:48.903180  420066 command_runner.go:130] > Version:          1.24.1
	I0108 23:18:48.903191  420066 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:18:48.903198  420066 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:18:48.903207  420066 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:18:48.903215  420066 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:18:48.903222  420066 command_runner.go:130] > Compiler:         gc
	I0108 23:18:48.903229  420066 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:18:48.903245  420066 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:18:48.903260  420066 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:18:48.903267  420066 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:18:48.903278  420066 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:18:48.907466  420066 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 23:18:48.908776  420066 out.go:177]   - env NO_PROXY=192.168.39.18
	I0108 23:18:48.910001  420066 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:18:48.912877  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:48.913294  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:18:48.913328  420066 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:18:48.913581  420066 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:18:48.917819  420066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:18:48.930443  420066 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395 for IP: 192.168.39.214
	I0108 23:18:48.930488  420066 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:18:48.930648  420066 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:18:48.930711  420066 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:18:48.930727  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:18:48.930740  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:18:48.930752  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:18:48.930763  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:18:48.930816  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:18:48.930844  420066 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:18:48.930855  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:18:48.930878  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:18:48.930900  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:18:48.930925  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:18:48.930966  420066 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:18:48.930991  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:18:48.931004  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:18:48.931015  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:18:48.931409  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:18:48.957051  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:18:48.979728  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:18:49.002455  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:18:49.027693  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:18:49.050323  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:18:49.076585  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:18:49.101735  420066 ssh_runner.go:195] Run: openssl version
	I0108 23:18:49.107471  420066 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 23:18:49.107548  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:18:49.116663  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:18:49.120876  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:18:49.120898  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:18:49.120941  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:18:49.126235  420066 command_runner.go:130] > b5213941
	I0108 23:18:49.126324  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:18:49.135740  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:18:49.144831  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:18:49.149590  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:18:49.149758  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:18:49.149815  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:18:49.155309  420066 command_runner.go:130] > 51391683
	I0108 23:18:49.155651  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0108 23:18:49.165477  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:18:49.175435  420066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:18:49.180251  420066 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:18:49.180542  420066 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:18:49.180597  420066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:18:49.186230  420066 command_runner.go:130] > 3ec20f2e
	I0108 23:18:49.186513  420066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:18:49.196465  420066 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:18:49.200638  420066 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:18:49.200678  420066 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:18:49.200765  420066 ssh_runner.go:195] Run: crio config
	I0108 23:18:49.256796  420066 command_runner.go:130] ! time="2024-01-08 23:18:49.243374064Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 23:18:49.257033  420066 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:18:49.261669  420066 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:18:49.261688  420066 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:18:49.261695  420066 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:18:49.261698  420066 command_runner.go:130] > #
	I0108 23:18:49.261708  420066 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:18:49.261719  420066 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:18:49.261735  420066 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:18:49.261749  420066 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:18:49.261756  420066 command_runner.go:130] > # reload'.
	I0108 23:18:49.261762  420066 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:18:49.261771  420066 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:18:49.261777  420066 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:18:49.261785  420066 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:18:49.261790  420066 command_runner.go:130] > [crio]
	I0108 23:18:49.261798  420066 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:18:49.261807  420066 command_runner.go:130] > # containers images, in this directory.
	I0108 23:18:49.261818  420066 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 23:18:49.261837  420066 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:18:49.261852  420066 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 23:18:49.261858  420066 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:18:49.261864  420066 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:18:49.261869  420066 command_runner.go:130] > storage_driver = "overlay"
	I0108 23:18:49.261875  420066 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:18:49.261884  420066 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:18:49.261891  420066 command_runner.go:130] > storage_option = [
	I0108 23:18:49.261898  420066 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 23:18:49.261904  420066 command_runner.go:130] > ]
	I0108 23:18:49.261914  420066 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:18:49.261929  420066 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:18:49.261939  420066 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:18:49.261949  420066 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:18:49.261960  420066 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:18:49.261971  420066 command_runner.go:130] > # always happen on a node reboot
	I0108 23:18:49.261976  420066 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:18:49.261981  420066 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:18:49.261987  420066 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:18:49.262000  420066 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:18:49.262009  420066 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:18:49.262022  420066 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:18:49.262039  420066 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:18:49.262047  420066 command_runner.go:130] > # internal_wipe = true
	I0108 23:18:49.262057  420066 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:18:49.262070  420066 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:18:49.262079  420066 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:18:49.262090  420066 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:18:49.262100  420066 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:18:49.262107  420066 command_runner.go:130] > [crio.api]
	I0108 23:18:49.262117  420066 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:18:49.262125  420066 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:18:49.262135  420066 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:18:49.262143  420066 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:18:49.262155  420066 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:18:49.262164  420066 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:18:49.262172  420066 command_runner.go:130] > # stream_port = "0"
	I0108 23:18:49.262178  420066 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:18:49.262185  420066 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:18:49.262200  420066 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:18:49.262210  420066 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:18:49.262223  420066 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:18:49.262236  420066 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:18:49.262245  420066 command_runner.go:130] > # minutes.
	I0108 23:18:49.262252  420066 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:18:49.262261  420066 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:18:49.262271  420066 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:18:49.262282  420066 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:18:49.262295  420066 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:18:49.262309  420066 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:18:49.262321  420066 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:18:49.262331  420066 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:18:49.262341  420066 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:18:49.262348  420066 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 23:18:49.262359  420066 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:18:49.262371  420066 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 23:18:49.262392  420066 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:18:49.262404  420066 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:18:49.262414  420066 command_runner.go:130] > [crio.runtime]
	I0108 23:18:49.262425  420066 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:18:49.262453  420066 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:18:49.262460  420066 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:18:49.262470  420066 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:18:49.262481  420066 command_runner.go:130] > # default_ulimits = [
	I0108 23:18:49.262487  420066 command_runner.go:130] > # ]
	I0108 23:18:49.262500  420066 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:18:49.262509  420066 command_runner.go:130] > # no_pivot = false
	I0108 23:18:49.262516  420066 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:18:49.262528  420066 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:18:49.262540  420066 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:18:49.262553  420066 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:18:49.262564  420066 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:18:49.262575  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:18:49.262586  420066 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 23:18:49.262596  420066 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:18:49.262603  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:18:49.262612  420066 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:18:49.262627  420066 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:18:49.262639  420066 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:18:49.262652  420066 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:18:49.262662  420066 command_runner.go:130] > conmon_env = [
	I0108 23:18:49.262675  420066 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 23:18:49.262683  420066 command_runner.go:130] > ]
	I0108 23:18:49.262688  420066 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:18:49.262699  420066 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:18:49.262713  420066 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:18:49.262724  420066 command_runner.go:130] > # default_env = [
	I0108 23:18:49.262731  420066 command_runner.go:130] > # ]
	I0108 23:18:49.262741  420066 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:18:49.262750  420066 command_runner.go:130] > # selinux = false
	I0108 23:18:49.262761  420066 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:18:49.262771  420066 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:18:49.262776  420066 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:18:49.262786  420066 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:18:49.262796  420066 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:18:49.262809  420066 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:18:49.262822  420066 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:18:49.262833  420066 command_runner.go:130] > # which might increase security.
	I0108 23:18:49.262841  420066 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 23:18:49.262854  420066 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:18:49.262863  420066 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:18:49.262873  420066 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:18:49.262888  420066 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:18:49.262900  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:18:49.262910  420066 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:18:49.262921  420066 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:18:49.262931  420066 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:18:49.262940  420066 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:18:49.262946  420066 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:18:49.262956  420066 command_runner.go:130] > # irqbalance daemon.
	I0108 23:18:49.262965  420066 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:18:49.262980  420066 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:18:49.262991  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:18:49.262999  420066 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:18:49.263010  420066 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:18:49.263020  420066 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:18:49.263030  420066 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:18:49.263036  420066 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:18:49.263048  420066 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:18:49.263063  420066 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:18:49.263070  420066 command_runner.go:130] > # will be added.
	I0108 23:18:49.263080  420066 command_runner.go:130] > # default_capabilities = [
	I0108 23:18:49.263088  420066 command_runner.go:130] > # 	"CHOWN",
	I0108 23:18:49.263097  420066 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:18:49.263104  420066 command_runner.go:130] > # 	"FSETID",
	I0108 23:18:49.263112  420066 command_runner.go:130] > # 	"FOWNER",
	I0108 23:18:49.263116  420066 command_runner.go:130] > # 	"SETGID",
	I0108 23:18:49.263124  420066 command_runner.go:130] > # 	"SETUID",
	I0108 23:18:49.263131  420066 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:18:49.263141  420066 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:18:49.263149  420066 command_runner.go:130] > # 	"KILL",
	I0108 23:18:49.263158  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263168  420066 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:18:49.263181  420066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:18:49.263191  420066 command_runner.go:130] > # default_sysctls = [
	I0108 23:18:49.263198  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263204  420066 command_runner.go:130] > # List of devices on the host that a
	I0108 23:18:49.263216  420066 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:18:49.263226  420066 command_runner.go:130] > # allowed_devices = [
	I0108 23:18:49.263236  420066 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:18:49.263245  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263253  420066 command_runner.go:130] > # List of additional devices. specified as
	I0108 23:18:49.263268  420066 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:18:49.263280  420066 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:18:49.263315  420066 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:18:49.263327  420066 command_runner.go:130] > # additional_devices = [
	I0108 23:18:49.263333  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263342  420066 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:18:49.263352  420066 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:18:49.263372  420066 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:18:49.263384  420066 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:18:49.263390  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263401  420066 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:18:49.263414  420066 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:18:49.263424  420066 command_runner.go:130] > # Defaults to false.
	I0108 23:18:49.263438  420066 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:18:49.263446  420066 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:18:49.263460  420066 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:18:49.263471  420066 command_runner.go:130] > # hooks_dir = [
	I0108 23:18:49.263482  420066 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:18:49.263488  420066 command_runner.go:130] > # ]
	I0108 23:18:49.263501  420066 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:18:49.263515  420066 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:18:49.263523  420066 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:18:49.263527  420066 command_runner.go:130] > #
	I0108 23:18:49.263537  420066 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:18:49.263552  420066 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:18:49.263564  420066 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:18:49.263573  420066 command_runner.go:130] > #
	I0108 23:18:49.263583  420066 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:18:49.263596  420066 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:18:49.263605  420066 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:18:49.263613  420066 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:18:49.263618  420066 command_runner.go:130] > #
	I0108 23:18:49.263629  420066 command_runner.go:130] > # default_mounts_file = ""
	I0108 23:18:49.263639  420066 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:18:49.263652  420066 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:18:49.263662  420066 command_runner.go:130] > pids_limit = 1024
	I0108 23:18:49.263672  420066 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0108 23:18:49.263688  420066 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:18:49.263698  420066 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:18:49.263710  420066 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:18:49.263721  420066 command_runner.go:130] > # log_size_max = -1
	I0108 23:18:49.263732  420066 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0108 23:18:49.263742  420066 command_runner.go:130] > # log_to_journald = false
	I0108 23:18:49.263754  420066 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:18:49.263765  420066 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:18:49.263776  420066 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:18:49.263784  420066 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:18:49.263792  420066 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:18:49.263803  420066 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:18:49.263813  420066 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:18:49.263823  420066 command_runner.go:130] > # read_only = false
	I0108 23:18:49.263835  420066 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:18:49.263849  420066 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:18:49.263859  420066 command_runner.go:130] > # live configuration reload.
	I0108 23:18:49.263866  420066 command_runner.go:130] > # log_level = "info"
	I0108 23:18:49.263873  420066 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:18:49.263884  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:18:49.263894  420066 command_runner.go:130] > # log_filter = ""
	I0108 23:18:49.263905  420066 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:18:49.263916  420066 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:18:49.263926  420066 command_runner.go:130] > # separated by comma.
	I0108 23:18:49.263936  420066 command_runner.go:130] > # uid_mappings = ""
	I0108 23:18:49.263946  420066 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:18:49.263955  420066 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:18:49.263961  420066 command_runner.go:130] > # separated by comma.
	I0108 23:18:49.263972  420066 command_runner.go:130] > # gid_mappings = ""
	I0108 23:18:49.263986  420066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:18:49.264000  420066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:18:49.264011  420066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:18:49.264021  420066 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:18:49.264031  420066 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:18:49.264041  420066 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:18:49.264051  420066 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:18:49.264062  420066 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:18:49.264075  420066 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:18:49.264088  420066 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:18:49.264100  420066 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:18:49.264108  420066 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:18:49.264119  420066 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:18:49.264128  420066 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:18:49.264136  420066 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:18:49.264149  420066 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:18:49.264160  420066 command_runner.go:130] > drop_infra_ctr = false
	I0108 23:18:49.264173  420066 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:18:49.264184  420066 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:18:49.264199  420066 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:18:49.264206  420066 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:18:49.264212  420066 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:18:49.264225  420066 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:18:49.264236  420066 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:18:49.264251  420066 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:18:49.264261  420066 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 23:18:49.264274  420066 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:18:49.264287  420066 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:18:49.264296  420066 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:18:49.264302  420066 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:18:49.264313  420066 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:18:49.264329  420066 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 23:18:49.264348  420066 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0108 23:18:49.264359  420066 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:18:49.264380  420066 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:18:49.264389  420066 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:18:49.264396  420066 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:18:49.264405  420066 command_runner.go:130] > # ]
	I0108 23:18:49.264416  420066 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:18:49.264434  420066 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:18:49.264452  420066 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:18:49.264464  420066 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:18:49.264470  420066 command_runner.go:130] > #
	I0108 23:18:49.264478  420066 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:18:49.264490  420066 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:18:49.264498  420066 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:18:49.264509  420066 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:18:49.264519  420066 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:18:49.264530  420066 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:18:49.264538  420066 command_runner.go:130] > # Where:
	I0108 23:18:49.264549  420066 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:18:49.264559  420066 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:18:49.264569  420066 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:18:49.264583  420066 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:18:49.264593  420066 command_runner.go:130] > #   in $PATH.
	I0108 23:18:49.264603  420066 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:18:49.264614  420066 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:18:49.264627  420066 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:18:49.264637  420066 command_runner.go:130] > #   state.
	I0108 23:18:49.264643  420066 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:18:49.264656  420066 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 23:18:49.264672  420066 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:18:49.264684  420066 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:18:49.264697  420066 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:18:49.264711  420066 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:18:49.264721  420066 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:18:49.264730  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:18:49.264743  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:18:49.264757  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:18:49.264770  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:18:49.264785  420066 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:18:49.264798  420066 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:18:49.264807  420066 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:18:49.264818  420066 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:18:49.264829  420066 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:18:49.264840  420066 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:18:49.264851  420066 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 23:18:49.264861  420066 command_runner.go:130] > runtime_type = "oci"
	I0108 23:18:49.264868  420066 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:18:49.264878  420066 command_runner.go:130] > runtime_config_path = ""
	I0108 23:18:49.264888  420066 command_runner.go:130] > monitor_path = ""
	I0108 23:18:49.264893  420066 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:18:49.264900  420066 command_runner.go:130] > monitor_exec_cgroup = ""
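	A runtime handler defined here is what Kubernetes selects through a RuntimeClass. A minimal sketch, assuming the runc handler above and a hypothetical RuntimeClass name:

	    kubectl apply -f - <<'EOF'
	    apiVersion: node.k8s.io/v1
	    kind: RuntimeClass
	    metadata:
	      name: runc-class        # hypothetical name
	    handler: runc             # must match a [crio.runtime.runtimes.<handler>] entry
	    EOF

	Pods can then opt in by setting spec.runtimeClassName to that name.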
	I0108 23:18:49.264911  420066 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:18:49.264922  420066 command_runner.go:130] > # running containers
	I0108 23:18:49.264930  420066 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:18:49.264943  420066 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:18:49.264974  420066 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:18:49.264983  420066 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0108 23:18:49.264990  420066 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:18:49.264998  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:18:49.265010  420066 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:18:49.265018  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:18:49.265030  420066 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:18:49.265040  420066 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:18:49.265054  420066 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:18:49.265064  420066 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:18:49.265073  420066 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:18:49.265088  420066 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:18:49.265104  420066 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:18:49.265116  420066 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:18:49.265133  420066 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:18:49.265148  420066 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:18:49.265157  420066 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:18:49.265171  420066 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:18:49.265181  420066 command_runner.go:130] > # Example:
	I0108 23:18:49.265190  420066 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:18:49.265201  420066 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:18:49.265213  420066 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:18:49.265223  420066 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:18:49.265232  420066 command_runner.go:130] > # cpuset = 0
	I0108 23:18:49.265237  420066 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:18:49.265242  420066 command_runner.go:130] > # Where:
	I0108 23:18:49.265250  420066 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:18:49.265266  420066 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:18:49.265278  420066 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:18:49.265288  420066 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:18:49.265303  420066 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:18:49.265316  420066 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:18:49.265322  420066 command_runner.go:130] > # 
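	Following the workload example above, a pod opts in with the activation annotation (key only, value ignored) and can override a resource per container via the annotation prefix. A minimal sketch with hypothetical pod and container names:

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: workload-demo                                   # hypothetical
	      annotations:
	        io.crio/workload: ""                                # activation annotation, value ignored
	        io.crio.workload-type/main: '{"cpushares": "512"}'  # per-container override for "main"
	    spec:
	      containers:
	      - name: main
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF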
	I0108 23:18:49.265329  420066 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:18:49.265337  420066 command_runner.go:130] > #
	I0108 23:18:49.265348  420066 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:18:49.265361  420066 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:18:49.265375  420066 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:18:49.265385  420066 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:18:49.265398  420066 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:18:49.265406  420066 command_runner.go:130] > [crio.image]
	I0108 23:18:49.265412  420066 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:18:49.265422  420066 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:18:49.265441  420066 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:18:49.265455  420066 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:18:49.265465  420066 command_runner.go:130] > # global_auth_file = ""
	I0108 23:18:49.265474  420066 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:18:49.265485  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:18:49.265493  420066 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:18:49.265503  420066 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:18:49.265513  420066 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:18:49.265526  420066 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:18:49.265538  420066 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:18:49.265550  420066 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:18:49.265563  420066 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 23:18:49.265573  420066 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 23:18:49.265583  420066 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:18:49.265588  420066 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:18:49.265601  420066 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:18:49.265615  420066 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:18:49.265629  420066 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:18:49.265642  420066 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:18:49.265653  420066 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:18:49.265663  420066 command_runner.go:130] > # signature_policy = ""
	I0108 23:18:49.265669  420066 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:18:49.265682  420066 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:18:49.265692  420066 command_runner.go:130] > # changing them here.
	I0108 23:18:49.265703  420066 command_runner.go:130] > # insecure_registries = [
	I0108 23:18:49.265709  420066 command_runner.go:130] > # ]
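	As these comments recommend, registries are better configured system-wide in /etc/containers/registries.conf. A sketch of an insecure-registry entry there, assuming the v2 registries.conf syntax and a hypothetical registry address:

	    sudo tee -a /etc/containers/registries.conf >/dev/null <<'EOF'
	    [[registry]]
	    location = "registry.example.internal:5000"   # hypothetical registry
	    insecure = true                                # skip TLS verification for this registry only
	    EOF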
	I0108 23:18:49.265724  420066 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:18:49.265735  420066 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:18:49.265745  420066 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:18:49.265752  420066 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:18:49.265760  420066 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:18:49.265771  420066 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:18:49.265781  420066 command_runner.go:130] > # CNI plugins.
	I0108 23:18:49.265788  420066 command_runner.go:130] > [crio.network]
	I0108 23:18:49.265801  420066 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:18:49.265813  420066 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:18:49.265824  420066 command_runner.go:130] > # cni_default_network = ""
	I0108 23:18:49.265835  420066 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:18:49.265842  420066 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:18:49.265851  420066 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:18:49.265861  420066 command_runner.go:130] > # plugin_dirs = [
	I0108 23:18:49.265868  420066 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:18:49.265877  420066 command_runner.go:130] > # ]
	I0108 23:18:49.265887  420066 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:18:49.265896  420066 command_runner.go:130] > [crio.metrics]
	I0108 23:18:49.265905  420066 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:18:49.265915  420066 command_runner.go:130] > enable_metrics = true
	I0108 23:18:49.265921  420066 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:18:49.265928  420066 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:18:49.265938  420066 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:18:49.265953  420066 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:18:49.265966  420066 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:18:49.265973  420066 command_runner.go:130] > # metrics_collectors = [
	I0108 23:18:49.265982  420066 command_runner.go:130] > # 	"operations",
	I0108 23:18:49.265991  420066 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:18:49.266000  420066 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:18:49.266008  420066 command_runner.go:130] > # 	"operations_errors",
	I0108 23:18:49.266013  420066 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:18:49.266017  420066 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:18:49.266022  420066 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:18:49.266026  420066 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:18:49.266031  420066 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:18:49.266039  420066 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:18:49.266050  420066 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:18:49.266061  420066 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:18:49.266068  420066 command_runner.go:130] > # 	"containers_oom",
	I0108 23:18:49.266078  420066 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:18:49.266087  420066 command_runner.go:130] > # 	"operations_total",
	I0108 23:18:49.266095  420066 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:18:49.266105  420066 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:18:49.266111  420066 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:18:49.266117  420066 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:18:49.266122  420066 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:18:49.266129  420066 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:18:49.266134  420066 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:18:49.266140  420066 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:18:49.266144  420066 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:18:49.266150  420066 command_runner.go:130] > # ]
	I0108 23:18:49.266155  420066 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:18:49.266161  420066 command_runner.go:130] > # metrics_port = 9090
	I0108 23:18:49.266166  420066 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:18:49.266170  420066 command_runner.go:130] > # metrics_socket = ""
	I0108 23:18:49.266176  420066 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:18:49.266185  420066 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:18:49.266199  420066 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:18:49.266210  420066 command_runner.go:130] > # certificate on any modification event.
	I0108 23:18:49.266220  420066 command_runner.go:130] > # metrics_cert = ""
	I0108 23:18:49.266229  420066 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:18:49.266241  420066 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:18:49.266250  420066 command_runner.go:130] > # metrics_key = ""
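	With enable_metrics = true above and the default metrics_port of 9090, the Prometheus endpoint can be probed directly on the node. A sketch (exact metric names vary by CRI-O version):

	    curl -s http://127.0.0.1:9090/metrics | grep -i crio | head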
	I0108 23:18:49.266258  420066 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:18:49.266264  420066 command_runner.go:130] > [crio.tracing]
	I0108 23:18:49.266270  420066 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:18:49.266274  420066 command_runner.go:130] > # enable_tracing = false
	I0108 23:18:49.266279  420066 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 23:18:49.266284  420066 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:18:49.266289  420066 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:18:49.266298  420066 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:18:49.266304  420066 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:18:49.266310  420066 command_runner.go:130] > [crio.stats]
	I0108 23:18:49.266315  420066 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:18:49.266322  420066 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:18:49.266327  420066 command_runner.go:130] > # stats_collection_period = 0
	I0108 23:18:49.266392  420066 cni.go:84] Creating CNI manager for ""
	I0108 23:18:49.266401  420066 cni.go:136] 2 nodes found, recommending kindnet
	I0108 23:18:49.266411  420066 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:18:49.266440  420066 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266395 NodeName:multinode-266395-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:18:49.266557  420066 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266395-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
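	The ClusterConfiguration generated above should agree with what the control plane already stores in the kubeadm-config ConfigMap, which can be inspected with the same command the kubeadm preflight output points to further down:

	    kubectl -n kube-system get cm kubeadm-config -o yaml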
	
	I0108 23:18:49.266608  420066 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-266395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:18:49.266657  420066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:18:49.275634  420066 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0108 23:18:49.275829  420066 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0108 23:18:49.275902  420066 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0108 23:18:49.284602  420066 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0108 23:18:49.284631  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 23:18:49.284718  420066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 23:18:49.284729  420066 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0108 23:18:49.284774  420066 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0108 23:18:49.289307  420066 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 23:18:49.289340  420066 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 23:18:49.289357  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0108 23:18:50.503954  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:50.520389  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 23:18:50.520522  420066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 23:18:50.524721  420066 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 23:18:50.524909  420066 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 23:18:50.524941  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0108 23:18:52.375713  420066 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 23:18:52.375807  420066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 23:18:52.380709  420066 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 23:18:52.380767  420066 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 23:18:52.380795  420066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
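	Each of the three transfers above follows the same stat-then-copy pattern: check whether the binary already exists at the target path and only scp it from the local cache on a miss. A condensed sketch of that pattern, with "node" standing in for the SSH target used by the test:

	    for b in kubectl kubelet kubeadm; do
	      ssh node "stat -c '%s %y' /var/lib/minikube/binaries/v1.28.4/$b" 2>/dev/null \
	        || scp ".minikube/cache/linux/amd64/v1.28.4/$b" "node:/var/lib/minikube/binaries/v1.28.4/$b"
	    done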
	I0108 23:18:52.604358  420066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 23:18:52.613706  420066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 23:18:52.630268  420066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:18:52.646054  420066 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0108 23:18:52.649918  420066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:18:52.660897  420066 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:18:52.661169  420066 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:18:52.661504  420066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:18:52.661546  420066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:52.676024  420066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0108 23:18:52.676424  420066 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:52.676883  420066 main.go:141] libmachine: Using API Version  1
	I0108 23:18:52.676906  420066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:52.677195  420066 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:52.677385  420066 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:18:52.677528  420066 start.go:304] JoinCluster: &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:18:52.677641  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 23:18:52.677658  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:18:52.680449  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:52.680837  420066 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:18:52.680875  420066 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:18:52.681042  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:18:52.681227  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:18:52.681433  420066 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:18:52.681579  420066 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:18:52.855675  420066 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g5g6zm.2ihx86bhfv8bgkr7 --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0108 23:18:52.860827  420066 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:18:52.860884  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g5g6zm.2ihx86bhfv8bgkr7 --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266395-m02"
	I0108 23:18:52.904245  420066 command_runner.go:130] ! W0108 23:18:52.894861     821 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 23:18:53.032369  420066 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 23:18:55.228225  420066 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:18:55.228251  420066 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 23:18:55.228261  420066 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 23:18:55.228275  420066 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:18:55.228297  420066 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:18:55.228305  420066 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:18:55.228312  420066 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 23:18:55.228318  420066 command_runner.go:130] > This node has joined the cluster:
	I0108 23:18:55.228328  420066 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 23:18:55.228333  420066 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 23:18:55.228342  420066 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 23:18:55.228363  420066 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g5g6zm.2ihx86bhfv8bgkr7 --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266395-m02": (2.367462761s)
	I0108 23:18:55.228398  420066 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 23:18:55.464127  420066 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 23:18:55.464264  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-266395 minikube.k8s.io/updated_at=2024_01_08T23_18_55_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:18:55.571743  420066 command_runner.go:130] > node/multinode-266395-m02 labeled
	I0108 23:18:55.573245  420066 start.go:306] JoinCluster complete in 2.895713643s
	I0108 23:18:55.573264  420066 cni.go:84] Creating CNI manager for ""
	I0108 23:18:55.573270  420066 cni.go:136] 2 nodes found, recommending kindnet
	I0108 23:18:55.573317  420066 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:18:55.589571  420066 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:18:55.589605  420066 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 23:18:55.589615  420066 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 23:18:55.589626  420066 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:18:55.589645  420066 command_runner.go:130] > Access: 2024-01-08 23:17:27.217891140 +0000
	I0108 23:18:55.589653  420066 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 23:18:55.589660  420066 command_runner.go:130] > Change: 2024-01-08 23:17:25.384891140 +0000
	I0108 23:18:55.589668  420066 command_runner.go:130] >  Birth: -
	I0108 23:18:55.589960  420066 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:18:55.589984  420066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:18:55.617748  420066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:18:56.401291  420066 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:18:56.401328  420066 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:18:56.401337  420066 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 23:18:56.401345  420066 command_runner.go:130] > daemonset.apps/kindnet configured
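	Once the manifest is applied, the kindnet DaemonSet should schedule a pod onto the newly joined node; one way to confirm, assuming the same kubeconfig context:

	    kubectl -n kube-system rollout status daemonset/kindnet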
	I0108 23:18:56.401746  420066 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:56.402001  420066 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:18:56.402349  420066 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:18:56.402363  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:56.402373  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:56.402378  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:56.404632  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:56.404651  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:56.404657  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:56.404663  420066 round_trippers.go:580]     Content-Length: 291
	I0108 23:18:56.404668  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:56 GMT
	I0108 23:18:56.404673  420066 round_trippers.go:580]     Audit-Id: 2e00c33f-61e5-4c50-a7dd-a1dba73ca98e
	I0108 23:18:56.404678  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:56.404683  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:56.404689  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:56.404721  420066 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"418","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 23:18:56.404802  420066 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266395" context rescaled to 1 replicas
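	The GET against the coredns scale subresource followed by the rescale to 1 replica is equivalent to an imperative scale. A sketch, assuming the same context:

	    kubectl -n kube-system scale deployment/coredns --replicas=1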
	I0108 23:18:56.404830  420066 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:18:56.461796  420066 out.go:177] * Verifying Kubernetes components...
	I0108 23:18:56.463699  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:56.490293  420066 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:18:56.490591  420066 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:18:56.490834  420066 node_ready.go:35] waiting up to 6m0s for node "multinode-266395-m02" to be "Ready" ...
	I0108 23:18:56.490911  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:56.490921  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:56.490929  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:56.490935  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:56.494017  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:56.494079  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:56.494091  420066 round_trippers.go:580]     Audit-Id: 3c8be853-efae-4a48-92cc-35dbacb20594
	I0108 23:18:56.494100  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:56.494115  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:56.494127  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:56.494138  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:56.494146  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:56 GMT
	I0108 23:18:56.494265  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:56.991329  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:56.991354  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:56.991382  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:56.991392  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:56.993987  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:56.994001  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:56.994008  420066 round_trippers.go:580]     Audit-Id: 63ff08bf-7e45-47ca-bf91-c9edb491be2a
	I0108 23:18:56.994013  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:56.994018  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:56.994023  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:56.994029  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:56.994034  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:56 GMT
	I0108 23:18:56.994468  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:57.491113  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:57.491139  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:57.491148  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:57.491155  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:57.494046  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:57.494070  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:57.494077  420066 round_trippers.go:580]     Audit-Id: e11f1bc7-30ff-4cd7-bd9b-c732e1448347
	I0108 23:18:57.494083  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:57.494088  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:57.494093  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:57.494098  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:57.494108  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:57 GMT
	I0108 23:18:57.494319  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:57.991575  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:57.991602  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:57.991611  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:57.991618  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:57.994856  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:57.994881  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:57.994888  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:57.994894  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:57.994899  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:57 GMT
	I0108 23:18:57.994904  420066 round_trippers.go:580]     Audit-Id: 30ce4dc9-3910-4c3a-8961-db05c0499552
	I0108 23:18:57.994917  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:57.994927  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:57.995486  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:58.491177  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:58.491207  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:58.491216  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:58.491222  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:58.494206  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:58.494231  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:58.494237  420066 round_trippers.go:580]     Audit-Id: 5f7db234-3ab1-45f0-84be-40709e3692fc
	I0108 23:18:58.494243  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:58.494248  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:58.494253  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:58.494264  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:58.494272  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:58 GMT
	I0108 23:18:58.494497  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:58.494807  420066 node_ready.go:58] node "multinode-266395-m02" has status "Ready":"False"
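	The polling loop above repeatedly GETs the Node object and reads its Ready condition. The same check from the command line, as a sketch:

	    kubectl get node multinode-266395-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'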
	I0108 23:18:58.992002  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:58.992027  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:58.992036  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:58.992042  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:58.995195  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:18:58.995228  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:58.995242  420066 round_trippers.go:580]     Audit-Id: da61a131-79ab-4e4c-83d4-4bfb50159fe9
	I0108 23:18:58.995252  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:58.995260  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:58.995267  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:58.995276  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:58.995283  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:58 GMT
	I0108 23:18:58.995528  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:59.491176  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:59.491212  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:59.491225  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:59.491236  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:59.494099  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:59.494132  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:59.494142  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:59.494150  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:59 GMT
	I0108 23:18:59.494159  420066 round_trippers.go:580]     Audit-Id: bcb71968-01d4-4e3c-9466-1cd528f61434
	I0108 23:18:59.494173  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:59.494181  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:59.494189  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:59.494747  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:18:59.991432  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:18:59.991461  420066 round_trippers.go:469] Request Headers:
	I0108 23:18:59.991473  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:18:59.991480  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:18:59.994406  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:18:59.994437  420066 round_trippers.go:577] Response Headers:
	I0108 23:18:59.994448  420066 round_trippers.go:580]     Audit-Id: 08f0900b-bb13-4458-9381-e00a07e1c61e
	I0108 23:18:59.994466  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:18:59.994474  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:18:59.994482  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:18:59.994496  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:18:59.994509  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:18:59 GMT
	I0108 23:18:59.994643  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:00.491428  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:00.491466  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:00.491478  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:00.491486  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:00.495024  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:00.495055  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:00.495069  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:00.495082  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:00 GMT
	I0108 23:19:00.495093  420066 round_trippers.go:580]     Audit-Id: 47083958-1fec-4fd9-a1dc-6f3f491360bb
	I0108 23:19:00.495103  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:00.495114  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:00.495125  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:00.495882  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:00.496335  420066 node_ready.go:58] node "multinode-266395-m02" has status "Ready":"False"
	I0108 23:19:00.991205  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:00.991227  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:00.991236  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:00.991242  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:00.994126  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:00.994142  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:00.994148  420066 round_trippers.go:580]     Audit-Id: a88b7fe9-08d1-466b-921d-ed27d3e85e43
	I0108 23:19:00.994153  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:00.994158  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:00.994163  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:00.994168  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:00.994174  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:00 GMT
	I0108 23:19:00.994772  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:01.491139  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:01.491169  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:01.491178  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:01.491184  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:01.493914  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:01.493928  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:01.493934  420066 round_trippers.go:580]     Audit-Id: 371b4ca3-ca12-48fe-b71d-15bc47edda07
	I0108 23:19:01.493940  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:01.493945  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:01.493950  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:01.493955  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:01.493960  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:01 GMT
	I0108 23:19:01.494322  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:01.991572  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:01.991603  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:01.991613  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:01.991619  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:01.994509  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:01.994531  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:01.994538  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:01.994547  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:01 GMT
	I0108 23:19:01.994554  420066 round_trippers.go:580]     Audit-Id: 492e8c2f-c458-434c-aff9-1c67e4258603
	I0108 23:19:01.994562  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:01.994570  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:01.994587  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:01.995079  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:02.491795  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:02.491819  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:02.491827  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:02.491834  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:02.495432  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:02.495458  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:02.495469  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:02.495477  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:02.495486  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:02.495494  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:02.495502  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:02 GMT
	I0108 23:19:02.495510  420066 round_trippers.go:580]     Audit-Id: 97290b92-0a1d-4d9e-b7d5-1cd6ae8e69b7
	I0108 23:19:02.495983  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:02.991593  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:02.991619  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:02.991628  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:02.991634  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:02.995053  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:02.995081  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:02.995091  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:02.995099  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:02.995108  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:02.995115  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:02 GMT
	I0108 23:19:02.995122  420066 round_trippers.go:580]     Audit-Id: 7bbf438f-59ba-4d9a-822b-e4c131848a72
	I0108 23:19:02.995134  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:02.995420  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:02.995777  420066 node_ready.go:58] node "multinode-266395-m02" has status "Ready":"False"
	I0108 23:19:03.491071  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:03.491098  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:03.491107  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:03.491114  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:03.494030  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:03.494095  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:03.494120  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:03 GMT
	I0108 23:19:03.494131  420066 round_trippers.go:580]     Audit-Id: b28b5cfc-5485-4580-b8cc-e2765b849830
	I0108 23:19:03.494142  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:03.494153  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:03.494165  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:03.494175  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:03.494376  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:03.992050  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:03.992084  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:03.992093  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:03.992099  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:03.994961  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:03.994987  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:03.994997  420066 round_trippers.go:580]     Audit-Id: e6378e8c-a02f-4d63-bfbc-edf79e8163d8
	I0108 23:19:03.995005  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:03.995014  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:03.995022  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:03.995030  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:03.995038  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:03 GMT
	I0108 23:19:03.995206  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:04.491949  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:04.491977  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:04.491985  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:04.491991  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:04.495058  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:04.495085  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:04.495095  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:04.495104  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:04.495112  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:04.495118  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:04.495125  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:04 GMT
	I0108 23:19:04.495135  420066 round_trippers.go:580]     Audit-Id: 9a4b2bd7-c16f-4ee9-8e6a-86e10f83f63f
	I0108 23:19:04.495441  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:04.991107  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:04.991138  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:04.991146  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:04.991152  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:04.993799  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:04.993820  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:04.993829  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:04.993838  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:04 GMT
	I0108 23:19:04.993846  420066 round_trippers.go:580]     Audit-Id: da94f1db-b5e0-49b2-8bb1-bcb353d68f1e
	I0108 23:19:04.993855  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:04.993862  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:04.993873  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:04.994007  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"472","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3168 chars]
	I0108 23:19:05.491659  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:05.491685  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.491694  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.491700  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.495189  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:05.495215  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.495223  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.495232  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.495240  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.495247  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.495256  420066 round_trippers.go:580]     Audit-Id: e698c098-fc63-4749-a1d3-25d96f7a3526
	I0108 23:19:05.495265  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.495590  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"499","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3134 chars]
	I0108 23:19:05.495969  420066 node_ready.go:49] node "multinode-266395-m02" has status "Ready":"True"
	I0108 23:19:05.495994  420066 node_ready.go:38] duration metric: took 9.005143428s waiting for node "multinode-266395-m02" to be "Ready" ...
	I0108 23:19:05.496008  420066 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
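
The node_ready wait above polls GET /api/v1/nodes/multinode-266395-m02 roughly every 500ms until the node's Ready condition flips to True (visible as the resourceVersion moving from 472 to 499 in the response bodies). Below is a minimal client-go sketch of that polling pattern, added for illustration only; it is not minikube's actual code, and the file name, function names, and 6-minute timeout are assumptions, as is a kubeconfig at the default location.

    // node_ready_sketch.go - illustrative only; assumes a recent k8s.io/apimachinery
    // (for wait.PollUntilContextTimeout) and a reachable cluster via ~/.kube/config.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady repeatedly GETs the node object and checks its Ready
    // condition, mirroring the ~500ms request cadence visible in the log above.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForNodeReady(context.Background(), cs, "multinode-266395-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
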
	I0108 23:19:05.496104  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:19:05.496116  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.496127  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.496136  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.500891  420066 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:19:05.500909  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.500923  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.500936  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.500952  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.500961  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.500970  420066 round_trippers.go:580]     Audit-Id: a81012c4-fbbd-4933-8804-3ce73ce06e66
	I0108 23:19:05.500976  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.501918  420066 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"500"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"414","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67332 chars]
	I0108 23:19:05.503932  420066 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.504019  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:19:05.504026  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.504034  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.504042  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.506060  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.506078  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.506087  420066 round_trippers.go:580]     Audit-Id: 5737958d-fdd9-4c28-8a1a-157db2efff4a
	I0108 23:19:05.506094  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.506102  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.506120  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.506131  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.506140  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.506333  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"414","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0108 23:19:05.506751  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.506764  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.506771  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.506777  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.509333  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.509351  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.509364  420066 round_trippers.go:580]     Audit-Id: d964ce94-0c44-4cfe-8964-192637222d1d
	I0108 23:19:05.509373  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.509382  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.509387  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.509393  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.509398  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.509516  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:05.509888  420066 pod_ready.go:92] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:05.509907  420066 pod_ready.go:81] duration metric: took 5.955159ms waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.509919  420066 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.509985  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:19:05.510001  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.510012  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.510024  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.512256  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.512273  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.512280  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.512286  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.512297  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.512305  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.512315  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.512323  420066 round_trippers.go:580]     Audit-Id: 75cc861a-b3e4-4138-bdc5-a796571b4f38
	I0108 23:19:05.512425  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"400","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0108 23:19:05.512759  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.512772  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.512779  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.512785  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.514648  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:19:05.514668  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.514678  420066 round_trippers.go:580]     Audit-Id: fdc1c951-a01f-4885-89c7-f4a57b0e643c
	I0108 23:19:05.514684  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.514690  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.514695  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.514701  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.514709  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.514851  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:05.515147  420066 pod_ready.go:92] pod "etcd-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:05.515163  420066 pod_ready.go:81] duration metric: took 5.229957ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.515175  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.515222  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:19:05.515229  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.515235  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.515241  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.517403  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.517420  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.517426  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.517432  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.517437  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.517444  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.517453  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.517470  420066 round_trippers.go:580]     Audit-Id: c617a442-0023-41e8-84ea-6bcbd55aaafa
	I0108 23:19:05.517600  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"401","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0108 23:19:05.518058  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.518075  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.518082  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.518088  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.519766  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:19:05.519783  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.519789  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.519795  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.519800  420066 round_trippers.go:580]     Audit-Id: 9ef15519-a262-4044-abae-6c05ae6806a7
	I0108 23:19:05.519805  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.519810  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.519815  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.519938  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:05.520282  420066 pod_ready.go:92] pod "kube-apiserver-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:05.520299  420066 pod_ready.go:81] duration metric: took 5.118168ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.520308  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.520361  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:19:05.520369  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.520376  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.520389  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.522330  420066 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:19:05.522344  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.522349  420066 round_trippers.go:580]     Audit-Id: 97c4d555-359c-4d7c-abee-5d5eec3a243c
	I0108 23:19:05.522355  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.522360  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.522365  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.522371  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.522375  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.522518  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"403","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0108 23:19:05.522885  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.522897  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.522904  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.522909  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.524929  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.524943  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.524949  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.524954  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.524959  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.524964  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.524970  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.524978  420066 round_trippers.go:580]     Audit-Id: 21a61fbd-3bf0-4676-9b25-764896622338
	I0108 23:19:05.525516  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:05.525770  420066 pod_ready.go:92] pod "kube-controller-manager-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:05.525781  420066 pod_ready.go:81] duration metric: took 5.466271ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.525791  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.692174  420066 request.go:629] Waited for 166.320736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:19:05.692271  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:19:05.692278  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.692307  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.692324  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.695101  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:05.695127  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.695138  420066 round_trippers.go:580]     Audit-Id: c34542bc-f40f-48f8-a4c9-4a9d342cf7af
	I0108 23:19:05.695147  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.695156  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.695164  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.695173  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.695182  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.695635  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"379","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
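
The "Waited ... due to client-side throttling, not priority and fairness" messages in this stretch of the log come from client-go's local token-bucket rate limiter, not from the API server. A minimal sketch under that assumption (this is not minikube's code): the default budget is rest.DefaultQPS = 5 and rest.DefaultBurst = 10, and raising QPS/Burst on the rest.Config is what removes such artificial waits for bursty Get/List sequences.

    // throttling_sketch.go - illustrative only; shows where the client-side
    // limiter that produced the waits above is configured.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Allow larger request bursts before the client-side limiter starts
        // inserting delays like the ~166ms/196ms waits logged above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
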
	I0108 23:19:05.892477  420066 request.go:629] Waited for 196.381596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.892559  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:05.892564  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:05.892571  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:05.892585  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:05.895681  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:05.895709  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:05.895720  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:05 GMT
	I0108 23:19:05.895727  420066 round_trippers.go:580]     Audit-Id: 2a0b2c6a-9cf9-4cf4-9de5-06d6ecb28b25
	I0108 23:19:05.895732  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:05.895737  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:05.895742  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:05.895748  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:05.896122  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:05.896473  420066 pod_ready.go:92] pod "kube-proxy-lvmgf" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:05.896491  420066 pod_ready.go:81] duration metric: took 370.694984ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:05.896502  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:06.092539  420066 request.go:629] Waited for 195.935643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:19:06.092602  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:19:06.092607  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:06.092615  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:06.092622  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:06.096446  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:06.096477  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:06.096485  420066 round_trippers.go:580]     Audit-Id: 03cad56f-7a93-40a1-8077-7ca4ca3c9597
	I0108 23:19:06.096490  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:06.096495  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:06.096500  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:06.096508  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:06.096523  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:06 GMT
	I0108 23:19:06.096795  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"487","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:19:06.292819  420066 request.go:629] Waited for 195.481282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:06.292943  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:19:06.292951  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:06.292967  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:06.292979  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:06.295591  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:06.295612  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:06.295619  420066 round_trippers.go:580]     Audit-Id: 70fe4e78-2b58-43b1-be74-04eb84a4c9dd
	I0108 23:19:06.295625  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:06.295630  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:06.295640  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:06.295648  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:06.295659  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:06 GMT
	I0108 23:19:06.295797  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"499","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_18_55_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3134 chars]
	I0108 23:19:06.296052  420066 pod_ready.go:92] pod "kube-proxy-v4q5n" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:06.296065  420066 pod_ready.go:81] duration metric: took 399.556799ms waiting for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:06.296074  420066 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:06.492181  420066 request.go:629] Waited for 196.014937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:19:06.492269  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:19:06.492279  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:06.492291  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:06.492311  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:06.495383  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:06.495405  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:06.495412  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:06.495418  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:06 GMT
	I0108 23:19:06.495423  420066 round_trippers.go:580]     Audit-Id: a361ecac-d8cc-4c8a-b680-e1deb0875a38
	I0108 23:19:06.495428  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:06.495433  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:06.495438  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:06.495907  420066 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"402","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0108 23:19:06.692630  420066 request.go:629] Waited for 196.212324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:06.692693  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:19:06.692701  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:06.692709  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:06.692718  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:06.695487  420066 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:19:06.695514  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:06.695521  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:06.695526  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:06 GMT
	I0108 23:19:06.695532  420066 round_trippers.go:580]     Audit-Id: 67ef96b0-71fb-4296-ad6f-7d98ae96594f
	I0108 23:19:06.695537  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:06.695542  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:06.695548  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:06.695757  420066 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0108 23:19:06.696157  420066 pod_ready.go:92] pod "kube-scheduler-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:19:06.696178  420066 pod_ready.go:81] duration metric: took 400.097952ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:19:06.696188  420066 pod_ready.go:38] duration metric: took 1.200161457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:19:06.696202  420066 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:19:06.696254  420066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:19:06.710148  420066 system_svc.go:56] duration metric: took 13.931722ms WaitForService to wait for kubelet.
	I0108 23:19:06.710183  420066 kubeadm.go:581] duration metric: took 10.305329245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:19:06.710211  420066 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:19:06.892540  420066 request.go:629] Waited for 182.233459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0108 23:19:06.892620  420066 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:19:06.892625  420066 round_trippers.go:469] Request Headers:
	I0108 23:19:06.892633  420066 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:19:06.892639  420066 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:19:06.895710  420066 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:19:06.895737  420066 round_trippers.go:577] Response Headers:
	I0108 23:19:06.895746  420066 round_trippers.go:580]     Audit-Id: 936bc60d-6c32-41b5-b287-df6d92c0b142
	I0108 23:19:06.895755  420066 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:19:06.895762  420066 round_trippers.go:580]     Content-Type: application/json
	I0108 23:19:06.895768  420066 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:19:06.895776  420066 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:19:06.895784  420066 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:19:06 GMT
	I0108 23:19:06.896284  420066 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"390","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10077 chars]
	I0108 23:19:06.896892  420066 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:19:06.896913  420066 node_conditions.go:123] node cpu capacity is 2
	I0108 23:19:06.896930  420066 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:19:06.896937  420066 node_conditions.go:123] node cpu capacity is 2
	I0108 23:19:06.896948  420066 node_conditions.go:105] duration metric: took 186.731092ms to run NodePressure ...
	I0108 23:19:06.896967  420066 start.go:228] waiting for startup goroutines ...
	I0108 23:19:06.896999  420066 start.go:242] writing updated cluster config ...
	I0108 23:19:06.897353  420066 ssh_runner.go:195] Run: rm -f paused
	I0108 23:19:06.949061  420066 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 23:19:06.951976  420066 out.go:177] * Done! kubectl is now configured to use "multinode-266395" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 23:17:26 UTC, ends at Mon 2024-01-08 23:19:14 UTC. --
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.497226945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755954497211165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c5b6ed33-db1e-4747-80a0-7851c3871fe9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.498050096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a980d33-3e58-4dc5-98bd-02fa8a4f411d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.498123614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a980d33-3e58-4dc5-98bd-02fa8a4f411d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.498321755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c32e71e119970090722e595571b6608ca57ee550242e00b3d27d89415e2d2857,PodSandboxId:a32f996ff896fa8d05b136abbef42e19512fcfca12706acf51d55e3931f6feed,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704755950873020185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67caacdd857dad9f49d7a52235333d6ada6075c129150360c30a897c040747c,PodSandboxId:3420bfd53e1cb1df59f899a9e23763cf0b1e114637c184cd7397a6cbdc849fc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704755898911221616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833e465912d3c65181727e9b5583a70fbc16308bb36f725a862c9e573c35dca,PodSandboxId:7bbf9c85ee9f4be128f82a5a759a9690a3a5ba588308dbdc2f73e18836ba120f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755898594983529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51ba302b7f116655bae9fb776c6dbd075c1ee424e286e886d5a5ab7c5a23c99,PodSandboxId:765365c9d7efbb59756f56b1acd3e884818f2ed0f91a144a41ed2374b5088f58,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704755895995375923,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00be4437b136a074f89fa55625e5f1188571928b625c687a587ae3307ab74e59,PodSandboxId:50c61b465b6366279b0d3569ddc4d95369e7784241482177f699815614ebe9c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704755893474471047,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25
347535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb253899af56294250c05702c4dfc5c666682ad97e52e2ecf19bdd1ffe2283ba,PodSandboxId:616c9e094bca53bfc36c65f6df814a518b5049e7ac8e05332b2a502127d1a470,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704755871473471015,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e86823fd3bea6910fd5393cb282aced83078c25a99bc94426b4da7ae47a96f,PodSandboxId:3cfae4bf7e2e22ae7f09087536b53294054aca8df93fea75f9bc6960210479c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704755871169403433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.container.h
ash: 2fa57b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a1765b7913b06e61c0efcdc4588878c1a0385165afc016c0e2ea1ccda4921aa,PodSandboxId:a3162377bd17366c67d13b2256f115d5058b33508928153eff8330a90f12c2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704755870683398847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9,PodSandboxId:c2878cd7a176d5d4789fd76b7d50ae703441e492bc965b2e726b44f2bbccae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704755870547854596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes
.container.hash: 9187ed9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a980d33-3e58-4dc5-98bd-02fa8a4f411d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.536021438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9ac6b0bb-484d-487d-b1ca-5b9ab4b87671 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.536102798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9ac6b0bb-484d-487d-b1ca-5b9ab4b87671 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.537750529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6ee24b3d-f7a9-49fd-aea6-8d62da08ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.538212545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755954538198148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6ee24b3d-f7a9-49fd-aea6-8d62da08ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.538749075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2875e728-3882-43cc-876c-64ff86317837 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.538832592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2875e728-3882-43cc-876c-64ff86317837 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.539092220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c32e71e119970090722e595571b6608ca57ee550242e00b3d27d89415e2d2857,PodSandboxId:a32f996ff896fa8d05b136abbef42e19512fcfca12706acf51d55e3931f6feed,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704755950873020185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67caacdd857dad9f49d7a52235333d6ada6075c129150360c30a897c040747c,PodSandboxId:3420bfd53e1cb1df59f899a9e23763cf0b1e114637c184cd7397a6cbdc849fc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704755898911221616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833e465912d3c65181727e9b5583a70fbc16308bb36f725a862c9e573c35dca,PodSandboxId:7bbf9c85ee9f4be128f82a5a759a9690a3a5ba588308dbdc2f73e18836ba120f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755898594983529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51ba302b7f116655bae9fb776c6dbd075c1ee424e286e886d5a5ab7c5a23c99,PodSandboxId:765365c9d7efbb59756f56b1acd3e884818f2ed0f91a144a41ed2374b5088f58,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704755895995375923,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00be4437b136a074f89fa55625e5f1188571928b625c687a587ae3307ab74e59,PodSandboxId:50c61b465b6366279b0d3569ddc4d95369e7784241482177f699815614ebe9c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704755893474471047,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25
347535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb253899af56294250c05702c4dfc5c666682ad97e52e2ecf19bdd1ffe2283ba,PodSandboxId:616c9e094bca53bfc36c65f6df814a518b5049e7ac8e05332b2a502127d1a470,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704755871473471015,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e86823fd3bea6910fd5393cb282aced83078c25a99bc94426b4da7ae47a96f,PodSandboxId:3cfae4bf7e2e22ae7f09087536b53294054aca8df93fea75f9bc6960210479c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704755871169403433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.container.h
ash: 2fa57b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a1765b7913b06e61c0efcdc4588878c1a0385165afc016c0e2ea1ccda4921aa,PodSandboxId:a3162377bd17366c67d13b2256f115d5058b33508928153eff8330a90f12c2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704755870683398847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9,PodSandboxId:c2878cd7a176d5d4789fd76b7d50ae703441e492bc965b2e726b44f2bbccae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704755870547854596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes
.container.hash: 9187ed9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2875e728-3882-43cc-876c-64ff86317837 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.580874872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6678b405-c270-41e6-9fd8-6ddd4cea1f64 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.581038267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6678b405-c270-41e6-9fd8-6ddd4cea1f64 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.583020865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f92a9159-ff0b-4bae-b5d5-3df53d08d314 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.583398535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755954583385806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f92a9159-ff0b-4bae-b5d5-3df53d08d314 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.584167907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=59ead5fd-87bb-4c2a-9115-03926a66cff3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.584239492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=59ead5fd-87bb-4c2a-9115-03926a66cff3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.584423764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c32e71e119970090722e595571b6608ca57ee550242e00b3d27d89415e2d2857,PodSandboxId:a32f996ff896fa8d05b136abbef42e19512fcfca12706acf51d55e3931f6feed,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704755950873020185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67caacdd857dad9f49d7a52235333d6ada6075c129150360c30a897c040747c,PodSandboxId:3420bfd53e1cb1df59f899a9e23763cf0b1e114637c184cd7397a6cbdc849fc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704755898911221616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833e465912d3c65181727e9b5583a70fbc16308bb36f725a862c9e573c35dca,PodSandboxId:7bbf9c85ee9f4be128f82a5a759a9690a3a5ba588308dbdc2f73e18836ba120f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755898594983529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51ba302b7f116655bae9fb776c6dbd075c1ee424e286e886d5a5ab7c5a23c99,PodSandboxId:765365c9d7efbb59756f56b1acd3e884818f2ed0f91a144a41ed2374b5088f58,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704755895995375923,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00be4437b136a074f89fa55625e5f1188571928b625c687a587ae3307ab74e59,PodSandboxId:50c61b465b6366279b0d3569ddc4d95369e7784241482177f699815614ebe9c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704755893474471047,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25
347535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb253899af56294250c05702c4dfc5c666682ad97e52e2ecf19bdd1ffe2283ba,PodSandboxId:616c9e094bca53bfc36c65f6df814a518b5049e7ac8e05332b2a502127d1a470,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704755871473471015,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e86823fd3bea6910fd5393cb282aced83078c25a99bc94426b4da7ae47a96f,PodSandboxId:3cfae4bf7e2e22ae7f09087536b53294054aca8df93fea75f9bc6960210479c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704755871169403433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.container.h
ash: 2fa57b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a1765b7913b06e61c0efcdc4588878c1a0385165afc016c0e2ea1ccda4921aa,PodSandboxId:a3162377bd17366c67d13b2256f115d5058b33508928153eff8330a90f12c2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704755870683398847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9,PodSandboxId:c2878cd7a176d5d4789fd76b7d50ae703441e492bc965b2e726b44f2bbccae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704755870547854596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes
.container.hash: 9187ed9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=59ead5fd-87bb-4c2a-9115-03926a66cff3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.619988474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=56673689-2a7d-4fa3-abc3-7d91910eb749 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.620071563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=56673689-2a7d-4fa3-abc3-7d91910eb749 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.621619527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=94968c16-c0ac-42c8-89ab-e28efb2aef1c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.622156496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704755954622141528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=94968c16-c0ac-42c8-89ab-e28efb2aef1c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.623198606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e039e1d-0721-4d3b-93b9-1c222926baaa name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.623270397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e039e1d-0721-4d3b-93b9-1c222926baaa name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:19:14 multinode-266395 crio[720]: time="2024-01-08 23:19:14.623455894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c32e71e119970090722e595571b6608ca57ee550242e00b3d27d89415e2d2857,PodSandboxId:a32f996ff896fa8d05b136abbef42e19512fcfca12706acf51d55e3931f6feed,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704755950873020185,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b67caacdd857dad9f49d7a52235333d6ada6075c129150360c30a897c040747c,PodSandboxId:3420bfd53e1cb1df59f899a9e23763cf0b1e114637c184cd7397a6cbdc849fc4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704755898911221616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f833e465912d3c65181727e9b5583a70fbc16308bb36f725a862c9e573c35dca,PodSandboxId:7bbf9c85ee9f4be128f82a5a759a9690a3a5ba588308dbdc2f73e18836ba120f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704755898594983529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e51ba302b7f116655bae9fb776c6dbd075c1ee424e286e886d5a5ab7c5a23c99,PodSandboxId:765365c9d7efbb59756f56b1acd3e884818f2ed0f91a144a41ed2374b5088f58,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704755895995375923,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00be4437b136a074f89fa55625e5f1188571928b625c687a587ae3307ab74e59,PodSandboxId:50c61b465b6366279b0d3569ddc4d95369e7784241482177f699815614ebe9c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704755893474471047,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25
347535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb253899af56294250c05702c4dfc5c666682ad97e52e2ecf19bdd1ffe2283ba,PodSandboxId:616c9e094bca53bfc36c65f6df814a518b5049e7ac8e05332b2a502127d1a470,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704755871473471015,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e86823fd3bea6910fd5393cb282aced83078c25a99bc94426b4da7ae47a96f,PodSandboxId:3cfae4bf7e2e22ae7f09087536b53294054aca8df93fea75f9bc6960210479c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704755871169403433,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.container.h
ash: 2fa57b81,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a1765b7913b06e61c0efcdc4588878c1a0385165afc016c0e2ea1ccda4921aa,PodSandboxId:a3162377bd17366c67d13b2256f115d5058b33508928153eff8330a90f12c2ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704755870683398847,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9,PodSandboxId:c2878cd7a176d5d4789fd76b7d50ae703441e492bc965b2e726b44f2bbccae9e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704755870547854596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes
.container.hash: 9187ed9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e039e1d-0721-4d3b-93b9-1c222926baaa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c32e71e119970       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   a32f996ff896f       busybox-5bc68d56bd-nl6pn
	b67caacdd857d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   3420bfd53e1cb       coredns-5dd5756b68-r8pvw
	f833e465912d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       0                   7bbf9c85ee9f4       storage-provisioner
	e51ba302b7f11       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   765365c9d7efb       kindnet-mnltq
	00be4437b136a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   50c61b465b636       kube-proxy-lvmgf
	cb253899af562       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   616c9e094bca5       kube-scheduler-multinode-266395
	22e86823fd3be       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   3cfae4bf7e2e2       etcd-multinode-266395
	4a1765b7913b0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   a3162377bd173       kube-controller-manager-multinode-266395
	c155a8bd6659d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   c2878cd7a176d       kube-apiserver-multinode-266395
	
	
	==> coredns [b67caacdd857dad9f49d7a52235333d6ada6075c129150360c30a897c040747c] <==
	[INFO] 10.244.0.3:38303 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202347s
	[INFO] 10.244.1.2:52317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164245s
	[INFO] 10.244.1.2:51176 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001921976s
	[INFO] 10.244.1.2:38072 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137586s
	[INFO] 10.244.1.2:51125 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099299s
	[INFO] 10.244.1.2:36995 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001372922s
	[INFO] 10.244.1.2:51277 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
	[INFO] 10.244.1.2:56813 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105333s
	[INFO] 10.244.1.2:48650 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074272s
	[INFO] 10.244.0.3:44528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072746s
	[INFO] 10.244.0.3:60103 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000302889s
	[INFO] 10.244.0.3:49156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074244s
	[INFO] 10.244.0.3:54951 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089173s
	[INFO] 10.244.1.2:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137992s
	[INFO] 10.244.1.2:40740 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166124s
	[INFO] 10.244.1.2:50153 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011737s
	[INFO] 10.244.1.2:42789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225886s
	[INFO] 10.244.0.3:39128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176368s
	[INFO] 10.244.0.3:56398 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210582s
	[INFO] 10.244.0.3:47531 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109148s
	[INFO] 10.244.0.3:42664 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011664s
	[INFO] 10.244.1.2:37291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017606s
	[INFO] 10.244.1.2:52542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123227s
	[INFO] 10.244.1.2:47626 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000225522s
	[INFO] 10.244.1.2:47668 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071813s
	
	
	==> describe nodes <==
	Name:               multinode-266395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-266395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_17_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:17:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:19:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:18:17 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:18:17 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:18:17 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:18:17 +0000   Mon, 08 Jan 2024 23:18:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-266395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d716d46d1ee14efebe781ccf0f9b5f7a
	  System UUID:                d716d46d-1ee1-4efe-be78-1ccf0f9b5f7a
	  Boot ID:                    96612030-cef8-446c-979b-e90760baf492
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nl6pn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-r8pvw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-multinode-266395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	  kube-system                 kindnet-mnltq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      63s
	  kube-system                 kube-apiserver-multinode-266395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-multinode-266395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-lvmgf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-multinode-266395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 61s   kube-proxy       
	  Normal  Starting                 76s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s   kubelet          Node multinode-266395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s   kubelet          Node multinode-266395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s   kubelet          Node multinode-266395 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s   node-controller  Node multinode-266395 event: Registered Node multinode-266395 in Controller
	  Normal  NodeReady                57s   kubelet          Node multinode-266395 status is now: NodeReady
	
	
	Name:               multinode-266395-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-266395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T23_18_55_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:18:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:19:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:19:05 +0000   Mon, 08 Jan 2024 23:18:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:19:05 +0000   Mon, 08 Jan 2024 23:18:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:19:05 +0000   Mon, 08 Jan 2024 23:18:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:19:05 +0000   Mon, 08 Jan 2024 23:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    multinode-266395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ceecfa7e5c428fa6b8738e4703b92d
	  System UUID:                65ceecfa-7e5c-428f-a6b8-738e4703b92d
	  Boot ID:                    04dd2ce7-2fa0-4124-926c-344f5d0f9405
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wz22p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-fcjt6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19s
	  kube-system                 kube-proxy-v4q5n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  RegisteredNode           19s                node-controller  Node multinode-266395-m02 event: Registered Node multinode-266395-m02 in Controller
	  Normal  NodeHasSufficientMemory  19s (x5 over 21s)  kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x5 over 21s)  kubelet          Node multinode-266395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x5 over 21s)  kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-266395-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan 8 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068902] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.368778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.478193] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154570] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.079780] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.979678] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.104533] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.132667] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.096651] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.202411] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +9.598836] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +9.289698] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[Jan 8 23:18] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [22e86823fd3bea6910fd5393cb282aced83078c25a99bc94426b4da7ae47a96f] <==
	{"level":"info","ts":"2024-01-08T23:17:53.159434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgVoteResp from d6d01a71dfc61a14 at term 2"}
	{"level":"info","ts":"2024-01-08T23:17:53.15946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T23:17:53.159487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d6d01a71dfc61a14 elected leader d6d01a71dfc61a14 at term 2"}
	{"level":"info","ts":"2024-01-08T23:17:53.16108Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:17:53.162471Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d6d01a71dfc61a14","local-member-attributes":"{Name:multinode-266395 ClientURLs:[https://192.168.39.18:2379]}","request-path":"/0/members/d6d01a71dfc61a14/attributes","cluster-id":"3959cc3c468ccbd1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T23:17:53.162657Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:17:53.163766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.18:2379"}
	{"level":"info","ts":"2024-01-08T23:17:53.163845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:17:53.164633Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T23:17:53.164745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3959cc3c468ccbd1","local-member-id":"d6d01a71dfc61a14","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:17:53.164826Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:17:53.164863Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:17:53.165171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T23:17:53.16521Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T23:18:55.748877Z","caller":"traceutil/trace.go:171","msg":"trace[266981007] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"147.06281ms","start":"2024-01-08T23:18:55.601767Z","end":"2024-01-08T23:18:55.74883Z","steps":["trace[266981007] 'process raft request'  (duration: 146.919974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T23:18:56.149622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.311173ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1879281887047132847 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-fcjt6\" mod_revision:458 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-fcjt6\" value_size:4657 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-fcjt6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T23:18:56.150254Z","caller":"traceutil/trace.go:171","msg":"trace[697909117] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"384.742228ms","start":"2024-01-08T23:18:55.765489Z","end":"2024-01-08T23:18:56.150231Z","steps":["trace[697909117] 'process raft request'  (duration: 170.944255ms)","trace[697909117] 'compare'  (duration: 212.105964ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:18:56.150399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T23:18:55.765473Z","time spent":"384.861654ms","remote":"127.0.0.1:46906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4705,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-fcjt6\" mod_revision:458 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-fcjt6\" value_size:4657 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-fcjt6\" > >"}
	{"level":"info","ts":"2024-01-08T23:18:56.149882Z","caller":"traceutil/trace.go:171","msg":"trace[1396362588] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"199.899271ms","start":"2024-01-08T23:18:55.949878Z","end":"2024-01-08T23:18:56.149777Z","steps":["trace[1396362588] 'read index received'  (duration: 227.491µs)","trace[1396362588] 'applied index is now lower than readState.Index'  (duration: 199.670186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:18:56.152107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.235005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/kindnet\" ","response":"range_response_count:1 size:902"}
	{"level":"info","ts":"2024-01-08T23:18:56.152196Z","caller":"traceutil/trace.go:171","msg":"trace[1765871790] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:1; response_revision:474; }","duration":"202.329671ms","start":"2024-01-08T23:18:55.949853Z","end":"2024-01-08T23:18:56.152183Z","steps":["trace[1765871790] 'agreement among raft nodes before linearized reading'  (duration: 202.190599ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T23:18:56.372888Z","caller":"traceutil/trace.go:171","msg":"trace[1313129516] linearizableReadLoop","detail":"{readStateIndex:496; appliedIndex:495; }","duration":"213.042607ms","start":"2024-01-08T23:18:56.159832Z","end":"2024-01-08T23:18:56.372874Z","steps":["trace[1313129516] 'read index received'  (duration: 118.348996ms)","trace[1313129516] 'applied index is now lower than readState.Index'  (duration: 94.693068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T23:18:56.373181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.352249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/kindnet\" ","response":"range_response_count:1 size:859"}
	{"level":"info","ts":"2024-01-08T23:18:56.373255Z","caller":"traceutil/trace.go:171","msg":"trace[926670354] range","detail":"{range_begin:/registry/clusterrolebindings/kindnet; range_end:; response_count:1; response_revision:475; }","duration":"213.410912ms","start":"2024-01-08T23:18:56.159808Z","end":"2024-01-08T23:18:56.373219Z","steps":["trace[926670354] 'agreement among raft nodes before linearized reading'  (duration: 213.312655ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T23:18:56.37328Z","caller":"traceutil/trace.go:171","msg":"trace[1386162385] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"220.152719ms","start":"2024-01-08T23:18:56.153113Z","end":"2024-01-08T23:18:56.373266Z","steps":["trace[1386162385] 'process raft request'  (duration: 125.122633ms)","trace[1386162385] 'compare'  (duration: 94.502416ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:19:14 up 1 min,  0 users,  load average: 0.85, 0.41, 0.15
	Linux multinode-266395 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [e51ba302b7f116655bae9fb776c6dbd075c1ee424e286e886d5a5ab7c5a23c99] <==
	I0108 23:18:16.845551       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0108 23:18:16.845640       1 main.go:107] hostIP = 192.168.39.18
	podIP = 192.168.39.18
	I0108 23:18:16.846008       1 main.go:116] setting mtu 1500 for CNI 
	I0108 23:18:16.846050       1 main.go:146] kindnetd IP family: "ipv4"
	I0108 23:18:16.846073       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0108 23:18:17.541392       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:18:17.541443       1 main.go:227] handling current node
	I0108 23:18:27.653771       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:18:27.653975       1 main.go:227] handling current node
	I0108 23:18:37.663579       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:18:37.663798       1 main.go:227] handling current node
	I0108 23:18:47.677795       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:18:47.677858       1 main.go:227] handling current node
	I0108 23:18:57.686253       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:18:57.686350       1 main.go:227] handling current node
	I0108 23:18:57.686376       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:18:57.686395       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	I0108 23:18:57.686644       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.214 Flags: [] Table: 0} 
	I0108 23:19:07.691639       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:19:07.691690       1 main.go:227] handling current node
	I0108 23:19:07.691709       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:19:07.691716       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9] <==
	I0108 23:17:55.203659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:17:55.203707       1 aggregator.go:166] initial CRD sync complete...
	I0108 23:17:55.203713       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 23:17:55.203717       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 23:17:55.203722       1 cache.go:39] Caches are synced for autoregister controller
	I0108 23:17:55.203834       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 23:17:55.203876       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:17:55.213081       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 23:17:55.236555       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 23:17:55.256487       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 23:17:56.082635       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 23:17:56.088801       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 23:17:56.088846       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 23:17:56.727138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:17:56.774440       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 23:17:56.923630       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 23:17:56.930757       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.18]
	I0108 23:17:56.931730       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 23:17:56.936428       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 23:17:57.188179       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 23:17:58.523857       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 23:17:58.556711       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 23:17:58.573270       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 23:18:10.953535       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 23:18:10.999084       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4a1765b7913b06e61c0efcdc4588878c1a0385165afc016c0e2ea1ccda4921aa] <==
	I0108 23:18:11.595675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.072µs"
	I0108 23:18:17.784871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="251.599µs"
	I0108 23:18:17.809215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.088µs"
	I0108 23:18:19.895369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="123.726µs"
	I0108 23:18:19.940798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.541294ms"
	I0108 23:18:19.941751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="719.516µs"
	I0108 23:18:20.301783       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 23:18:55.117052       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-266395-m02\" does not exist"
	I0108 23:18:55.131653       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-266395-m02" podCIDRs=["10.244.1.0/24"]
	I0108 23:18:55.144422       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fcjt6"
	I0108 23:18:55.163069       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v4q5n"
	I0108 23:18:55.308484       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-266395-m02"
	I0108 23:18:55.308826       1 event.go:307] "Event occurred" object="multinode-266395-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-266395-m02 event: Registered Node multinode-266395-m02 in Controller"
	I0108 23:19:05.269638       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:19:07.704773       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 23:19:07.721200       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wz22p"
	I0108 23:19:07.731498       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-nl6pn"
	I0108 23:19:07.754630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.777869ms"
	I0108 23:19:07.776343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.639851ms"
	I0108 23:19:07.803190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.750369ms"
	I0108 23:19:07.803283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.639µs"
	I0108 23:19:09.800770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.155455ms"
	I0108 23:19:09.801011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="141.558µs"
	I0108 23:19:11.066882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.715167ms"
	I0108 23:19:11.067169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.035µs"
	
	
	==> kube-proxy [00be4437b136a074f89fa55625e5f1188571928b625c687a587ae3307ab74e59] <==
	I0108 23:18:13.675570       1 server_others.go:69] "Using iptables proxy"
	I0108 23:18:13.689982       1 node.go:141] Successfully retrieved node IP: 192.168.39.18
	I0108 23:18:13.745042       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 23:18:13.745085       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 23:18:13.748252       1 server_others.go:152] "Using iptables Proxier"
	I0108 23:18:13.748351       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 23:18:13.748576       1 server.go:846] "Version info" version="v1.28.4"
	I0108 23:18:13.748610       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:18:13.750714       1 config.go:188] "Starting service config controller"
	I0108 23:18:13.753199       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 23:18:13.753380       1 config.go:97] "Starting endpoint slice config controller"
	I0108 23:18:13.753466       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 23:18:13.750783       1 config.go:315] "Starting node config controller"
	I0108 23:18:13.753657       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 23:18:13.854128       1 shared_informer.go:318] Caches are synced for node config
	I0108 23:18:13.854214       1 shared_informer.go:318] Caches are synced for service config
	I0108 23:18:13.854224       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cb253899af56294250c05702c4dfc5c666682ad97e52e2ecf19bdd1ffe2283ba] <==
	E0108 23:17:55.238263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 23:17:55.238619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 23:17:55.238630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:17:55.238750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 23:17:55.238878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 23:17:55.238885       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 23:17:56.062627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 23:17:56.062794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 23:17:56.083487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 23:17:56.083591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 23:17:56.105830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 23:17:56.105976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 23:17:56.204169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 23:17:56.204226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 23:17:56.231885       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 23:17:56.232027       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 23:17:56.241497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 23:17:56.241548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 23:17:56.315190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 23:17:56.315278       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 23:17:56.317576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 23:17:56.317596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 23:17:56.358180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 23:17:56.358326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0108 23:17:59.406237       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 23:17:26 UTC, ends at Mon 2024-01-08 23:19:15 UTC. --
	Jan 08 23:18:12 multinode-266395 kubelet[1268]: E0108 23:18:12.364657    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c65752e0-cd30-49cf-9645-5befeecc3d34-kube-api-access-2xzfx podName:c65752e0-cd30-49cf-9645-5befeecc3d34 nodeName:}" failed. No retries permitted until 2024-01-08 23:18:12.864587424 +0000 UTC m=+14.365346856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2xzfx" (UniqueName: "kubernetes.io/projected/c65752e0-cd30-49cf-9645-5befeecc3d34-kube-api-access-2xzfx") pod "kindnet-mnltq" (UID: "c65752e0-cd30-49cf-9645-5befeecc3d34") : failed to sync configmap cache: timed out waiting for the condition
	Jan 08 23:18:12 multinode-266395 kubelet[1268]: E0108 23:18:12.441516    1268 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jan 08 23:18:12 multinode-266395 kubelet[1268]: E0108 23:18:12.441643    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c37677d-6832-4d6b-8f29-c23d25347535-kube-proxy podName:9c37677d-6832-4d6b-8f29-c23d25347535 nodeName:}" failed. No retries permitted until 2024-01-08 23:18:12.941622275 +0000 UTC m=+14.442381696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9c37677d-6832-4d6b-8f29-c23d25347535-kube-proxy") pod "kube-proxy-lvmgf" (UID: "9c37677d-6832-4d6b-8f29-c23d25347535") : failed to sync configmap cache: timed out waiting for the condition
	Jan 08 23:18:16 multinode-266395 kubelet[1268]: I0108 23:18:16.874698    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lvmgf" podStartSLOduration=5.874614108 podCreationTimestamp="2024-01-08 23:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:18:13.871770701 +0000 UTC m=+15.372530140" watchObservedRunningTime="2024-01-08 23:18:16.874614108 +0000 UTC m=+18.375373548"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.740768    1268 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.783334    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mnltq" podStartSLOduration=6.783278982 podCreationTimestamp="2024-01-08 23:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:18:16.876239922 +0000 UTC m=+18.376999347" watchObservedRunningTime="2024-01-08 23:18:17.783278982 +0000 UTC m=+19.284038421"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.783664    1268 topology_manager.go:215] "Topology Admit Handler" podUID="5300c187-4f1f-4330-ae19-6bf2855763f2" podNamespace="kube-system" podName="coredns-5dd5756b68-r8pvw"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.791113    1268 topology_manager.go:215] "Topology Admit Handler" podUID="f15dcd0d-59b5-4f16-94c7-425f162c60ad" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.885341    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbhx8\" (UniqueName: \"kubernetes.io/projected/5300c187-4f1f-4330-ae19-6bf2855763f2-kube-api-access-lbhx8\") pod \"coredns-5dd5756b68-r8pvw\" (UID: \"5300c187-4f1f-4330-ae19-6bf2855763f2\") " pod="kube-system/coredns-5dd5756b68-r8pvw"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.885385    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f15dcd0d-59b5-4f16-94c7-425f162c60ad-tmp\") pod \"storage-provisioner\" (UID: \"f15dcd0d-59b5-4f16-94c7-425f162c60ad\") " pod="kube-system/storage-provisioner"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.885416    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5300c187-4f1f-4330-ae19-6bf2855763f2-config-volume\") pod \"coredns-5dd5756b68-r8pvw\" (UID: \"5300c187-4f1f-4330-ae19-6bf2855763f2\") " pod="kube-system/coredns-5dd5756b68-r8pvw"
	Jan 08 23:18:17 multinode-266395 kubelet[1268]: I0108 23:18:17.885440    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96d2k\" (UniqueName: \"kubernetes.io/projected/f15dcd0d-59b5-4f16-94c7-425f162c60ad-kube-api-access-96d2k\") pod \"storage-provisioner\" (UID: \"f15dcd0d-59b5-4f16-94c7-425f162c60ad\") " pod="kube-system/storage-provisioner"
	Jan 08 23:18:19 multinode-266395 kubelet[1268]: I0108 23:18:19.893757    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.893719215 podCreationTimestamp="2024-01-08 23:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:18:18.9241227 +0000 UTC m=+20.424882172" watchObservedRunningTime="2024-01-08 23:18:19.893719215 +0000 UTC m=+21.394478655"
	Jan 08 23:18:19 multinode-266395 kubelet[1268]: I0108 23:18:19.921724    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-r8pvw" podStartSLOduration=8.921660995 podCreationTimestamp="2024-01-08 23:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 23:18:19.898530313 +0000 UTC m=+21.399289773" watchObservedRunningTime="2024-01-08 23:18:19.921660995 +0000 UTC m=+21.422420434"
	Jan 08 23:18:58 multinode-266395 kubelet[1268]: E0108 23:18:58.849395    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 23:18:58 multinode-266395 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 23:18:58 multinode-266395 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 23:18:58 multinode-266395 kubelet[1268]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 23:19:07 multinode-266395 kubelet[1268]: I0108 23:19:07.760562    1268 topology_manager.go:215] "Topology Admit Handler" podUID="72697c77-17fa-4588-9f0f-c41eaad79e47" podNamespace="default" podName="busybox-5bc68d56bd-nl6pn"
	Jan 08 23:19:07 multinode-266395 kubelet[1268]: W0108 23:19:07.765423    1268 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-266395" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-266395' and this object
	Jan 08 23:19:07 multinode-266395 kubelet[1268]: E0108 23:19:07.765469    1268 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-266395" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-266395' and this object
	Jan 08 23:19:07 multinode-266395 kubelet[1268]: I0108 23:19:07.919485    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgcqb\" (UniqueName: \"kubernetes.io/projected/72697c77-17fa-4588-9f0f-c41eaad79e47-kube-api-access-qgcqb\") pod \"busybox-5bc68d56bd-nl6pn\" (UID: \"72697c77-17fa-4588-9f0f-c41eaad79e47\") " pod="default/busybox-5bc68d56bd-nl6pn"
	Jan 08 23:19:09 multinode-266395 kubelet[1268]: E0108 23:19:09.027376    1268 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 08 23:19:09 multinode-266395 kubelet[1268]: E0108 23:19:09.027441    1268 projected.go:198] Error preparing data for projected volume kube-api-access-qgcqb for pod default/busybox-5bc68d56bd-nl6pn: failed to sync configmap cache: timed out waiting for the condition
	Jan 08 23:19:09 multinode-266395 kubelet[1268]: E0108 23:19:09.027542    1268 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72697c77-17fa-4588-9f0f-c41eaad79e47-kube-api-access-qgcqb podName:72697c77-17fa-4588-9f0f-c41eaad79e47 nodeName:}" failed. No retries permitted until 2024-01-08 23:19:09.527506799 +0000 UTC m=+71.028266220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qgcqb" (UniqueName: "kubernetes.io/projected/72697c77-17fa-4588-9f0f-c41eaad79e47-kube-api-access-qgcqb") pod "busybox-5bc68d56bd-nl6pn" (UID: "72697c77-17fa-4588-9f0f-c41eaad79e47") : failed to sync configmap cache: timed out waiting for the condition
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-266395 -n multinode-266395
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-266395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.25s)
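For reference, the ping check that failed above can usually be re-run by hand against the same cluster. A minimal sketch, assuming the busybox-5bc68d56bd-nl6pn pod from the kubelet log is still running and that host.minikube.internal resolves from inside the pod (both are assumptions, not taken from the failure output); busybox ships both the nslookup and ping applets, so no extra tooling is needed in the pod:

$ kubectl --context multinode-266395 exec busybox-5bc68d56bd-nl6pn -- nslookup host.minikube.internal
$ kubectl --context multinode-266395 exec busybox-5bc68d56bd-nl6pn -- ping -c 1 host.minikube.internal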

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (691.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266395
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-266395
E0108 23:20:49.610163  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:21:13.677720  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:22:12.658955  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-266395: exit status 82 (2m1.202428533s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-266395"  ...
	* Stopping node "multinode-266395"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
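The GUEST_STOP_TIMEOUT box above names two artifacts to collect before filing an issue. A minimal sketch of that follow-up, assuming the same profile name and the log path printed in the message (run on the CI host, not inside the VM):

$ out/minikube-linux-amd64 -p multinode-266395 logs --file=logs.txt
$ cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log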
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-266395" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266395 --wait=true -v=8 --alsologtostderr
E0108 23:24:19.628224  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:25:49.610308  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:26:13.677929  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:27:36.725844  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:29:19.627845  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:30:42.674790  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:30:49.610968  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:31:13.678216  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266395 --wait=true -v=8 --alsologtostderr: (9m27.2657276s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266395
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-266395 -n multinode-266395
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-266395 logs -n 25: (1.665292341s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3286421314/001/cp-test_multinode-266395-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395:/home/docker/cp-test_multinode-266395-m02_multinode-266395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395 sudo cat                                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m02_multinode-266395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03:/home/docker/cp-test_multinode-266395-m02_multinode-266395-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395-m03 sudo cat                                   | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m02_multinode-266395-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp testdata/cp-test.txt                                                | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3286421314/001/cp-test_multinode-266395-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395:/home/docker/cp-test_multinode-266395-m03_multinode-266395.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395 sudo cat                                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m03_multinode-266395.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt                       | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02:/home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395-m02 sudo cat                                   | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-266395 node stop m03                                                          | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	| node    | multinode-266395 node start                                                             | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-266395                                                                | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC |                     |
	| stop    | -p multinode-266395                                                                     | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC |                     |
	| start   | -p multinode-266395                                                                     | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:22 UTC | 08 Jan 24 23:32 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-266395                                                                | multinode-266395 | jenkins | v1.32.0 | 08 Jan 24 23:32 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:22:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:22:39.206864  423858 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:22:39.207037  423858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:39.207052  423858 out.go:309] Setting ErrFile to fd 2...
	I0108 23:22:39.207060  423858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:22:39.207263  423858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:22:39.207875  423858 out.go:303] Setting JSON to false
	I0108 23:22:39.208909  423858 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14685,"bootTime":1704741474,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:22:39.208976  423858 start.go:138] virtualization: kvm guest
	I0108 23:22:39.211704  423858 out.go:177] * [multinode-266395] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:22:39.213197  423858 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:22:39.213280  423858 notify.go:220] Checking for updates...
	I0108 23:22:39.214638  423858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:22:39.216323  423858 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:22:39.218065  423858 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:22:39.219533  423858 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:22:39.220950  423858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:22:39.222817  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:22:39.222919  423858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:22:39.223421  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:22:39.223481  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:22:39.242076  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46245
	I0108 23:22:39.242589  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:22:39.243129  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:22:39.243150  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:22:39.243579  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:22:39.243743  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:22:39.279057  423858 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:22:39.280334  423858 start.go:298] selected driver: kvm2
	I0108 23:22:39.280351  423858 start.go:902] validating driver "kvm2" against &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:22:39.280500  423858 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:22:39.280814  423858 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:22:39.280906  423858 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:22:39.295608  423858 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:22:39.296290  423858 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:22:39.296349  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:22:39.296357  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:22:39.296368  423858 start_flags.go:323] config:
	{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:22:39.296594  423858 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:22:39.298600  423858 out.go:177] * Starting control plane node multinode-266395 in cluster multinode-266395
	I0108 23:22:39.300159  423858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:22:39.300202  423858 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 23:22:39.300216  423858 cache.go:56] Caching tarball of preloaded images
	I0108 23:22:39.300305  423858 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:22:39.300318  423858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:22:39.300473  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:22:39.300684  423858 start.go:365] acquiring machines lock for multinode-266395: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:22:39.300748  423858 start.go:369] acquired machines lock for "multinode-266395" in 40.788µs
	I0108 23:22:39.300774  423858 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:22:39.300782  423858 fix.go:54] fixHost starting: 
	I0108 23:22:39.301050  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:22:39.301093  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:22:39.314946  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0108 23:22:39.315377  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:22:39.315845  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:22:39.315875  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:22:39.316200  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:22:39.316383  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:22:39.316539  423858 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:22:39.318109  423858 fix.go:102] recreateIfNeeded on multinode-266395: state=Running err=<nil>
	W0108 23:22:39.318138  423858 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:22:39.320091  423858 out.go:177] * Updating the running kvm2 "multinode-266395" VM ...
	I0108 23:22:39.321383  423858 machine.go:88] provisioning docker machine ...
	I0108 23:22:39.321406  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:22:39.321638  423858 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:22:39.321804  423858 buildroot.go:166] provisioning hostname "multinode-266395"
	I0108 23:22:39.321823  423858 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:22:39.322067  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:22:39.324595  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:22:39.325096  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:22:39.325126  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:22:39.325294  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:22:39.325484  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:22:39.325606  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:22:39.325727  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:22:39.325849  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:22:39.326205  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:22:39.326220  423858 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395 && echo "multinode-266395" | sudo tee /etc/hostname
	I0108 23:22:57.763603  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:03.843762  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:06.915639  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:12.995687  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:16.067673  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:22.151641  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:25.219678  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:31.299648  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:34.371727  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:40.451685  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:43.523650  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:49.603699  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:52.675766  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:23:58.755678  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:01.827656  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:07.907781  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:10.979608  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:17.059730  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:20.131691  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:26.211720  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:29.283716  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:35.363680  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:38.435676  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:44.515752  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:47.587714  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:53.667664  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:24:56.739692  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:02.819708  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:05.891632  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:11.971688  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:15.043625  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:21.123673  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:24.195695  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:30.275670  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:33.347662  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:39.427645  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:42.499656  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:48.579680  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:51.651655  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:25:57.731694  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:00.803663  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:06.883677  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:09.955627  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:16.035674  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:19.107694  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:25.187637  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:28.259615  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:34.339703  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:37.411652  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:43.491765  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:46.563654  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:52.643774  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:26:55.715658  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:01.795700  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:04.867658  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:10.947685  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:14.019617  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:20.099682  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:23.171661  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:29.251672  423858 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I0108 23:27:32.252640  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:27:32.252684  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:32.254613  423858 machine.go:91] provisioned docker machine in 4m52.933208432s
	I0108 23:27:32.254666  423858 fix.go:56] fixHost completed within 4m52.953884511s
	I0108 23:27:32.254675  423858 start.go:83] releasing machines lock for "multinode-266395", held for 4m52.953909517s
	W0108 23:27:32.254697  423858 start.go:694] error starting host: provision: host is not running
	W0108 23:27:32.254853  423858 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 23:27:32.254866  423858 start.go:709] Will try again in 5 seconds ...
	I0108 23:27:37.256653  423858 start.go:365] acquiring machines lock for multinode-266395: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:27:37.256830  423858 start.go:369] acquired machines lock for "multinode-266395" in 77.784µs
	I0108 23:27:37.256870  423858 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:27:37.256879  423858 fix.go:54] fixHost starting: 
	I0108 23:27:37.257166  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:27:37.257192  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:27:37.272653  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0108 23:27:37.273150  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:27:37.273749  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:27:37.273776  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:27:37.274122  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:27:37.274289  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:37.274425  423858 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:27:37.275982  423858 fix.go:102] recreateIfNeeded on multinode-266395: state=Stopped err=<nil>
	I0108 23:27:37.276005  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	W0108 23:27:37.276150  423858 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:27:37.278416  423858 out.go:177] * Restarting existing kvm2 VM for "multinode-266395" ...
	I0108 23:27:37.279842  423858 main.go:141] libmachine: (multinode-266395) Calling .Start
	I0108 23:27:37.279998  423858 main.go:141] libmachine: (multinode-266395) Ensuring networks are active...
	I0108 23:27:37.280946  423858 main.go:141] libmachine: (multinode-266395) Ensuring network default is active
	I0108 23:27:37.281246  423858 main.go:141] libmachine: (multinode-266395) Ensuring network mk-multinode-266395 is active
	I0108 23:27:37.281697  423858 main.go:141] libmachine: (multinode-266395) Getting domain xml...
	I0108 23:27:37.282535  423858 main.go:141] libmachine: (multinode-266395) Creating domain...
	I0108 23:27:38.501244  423858 main.go:141] libmachine: (multinode-266395) Waiting to get IP...
	I0108 23:27:38.502352  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:38.502797  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:38.502860  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:38.502768  424668 retry.go:31] will retry after 242.127222ms: waiting for machine to come up
	I0108 23:27:38.746363  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:38.746947  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:38.746976  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:38.746908  424668 retry.go:31] will retry after 306.118515ms: waiting for machine to come up
	I0108 23:27:39.054489  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:39.055002  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:39.055038  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:39.054911  424668 retry.go:31] will retry after 468.568238ms: waiting for machine to come up
	I0108 23:27:39.525475  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:39.525915  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:39.525945  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:39.525871  424668 retry.go:31] will retry after 371.866245ms: waiting for machine to come up
	I0108 23:27:39.899470  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:39.899927  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:39.899958  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:39.899883  424668 retry.go:31] will retry after 556.559956ms: waiting for machine to come up
	I0108 23:27:40.457640  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:40.457988  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:40.458025  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:40.457937  424668 retry.go:31] will retry after 680.536703ms: waiting for machine to come up
	I0108 23:27:41.140012  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:41.140437  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:41.140457  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:41.140414  424668 retry.go:31] will retry after 897.45743ms: waiting for machine to come up
	I0108 23:27:42.039606  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:42.040094  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:42.040127  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:42.040024  424668 retry.go:31] will retry after 1.352043128s: waiting for machine to come up
	I0108 23:27:43.393920  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:43.394408  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:43.394439  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:43.394350  424668 retry.go:31] will retry after 1.230027391s: waiting for machine to come up
	I0108 23:27:44.625743  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:44.626211  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:44.626240  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:44.626159  424668 retry.go:31] will retry after 2.193479971s: waiting for machine to come up
	I0108 23:27:46.821409  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:46.821834  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:46.821866  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:46.821774  424668 retry.go:31] will retry after 2.605156252s: waiting for machine to come up
	I0108 23:27:49.428128  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:49.428670  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:49.428700  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:49.428642  424668 retry.go:31] will retry after 2.98369757s: waiting for machine to come up
	I0108 23:27:52.414092  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:52.414482  423858 main.go:141] libmachine: (multinode-266395) DBG | unable to find current IP address of domain multinode-266395 in network mk-multinode-266395
	I0108 23:27:52.414511  423858 main.go:141] libmachine: (multinode-266395) DBG | I0108 23:27:52.414431  424668 retry.go:31] will retry after 3.801265802s: waiting for machine to come up
	I0108 23:27:56.220406  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.220929  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has current primary IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.220978  423858 main.go:141] libmachine: (multinode-266395) Found IP for machine: 192.168.39.18
	I0108 23:27:56.220994  423858 main.go:141] libmachine: (multinode-266395) Reserving static IP address...
	I0108 23:27:56.221454  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "multinode-266395", mac: "52:54:00:64:1d:b6", ip: "192.168.39.18"} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.221485  423858 main.go:141] libmachine: (multinode-266395) Reserved static IP address: 192.168.39.18
	I0108 23:27:56.221507  423858 main.go:141] libmachine: (multinode-266395) DBG | skip adding static IP to network mk-multinode-266395 - found existing host DHCP lease matching {name: "multinode-266395", mac: "52:54:00:64:1d:b6", ip: "192.168.39.18"}
	I0108 23:27:56.221525  423858 main.go:141] libmachine: (multinode-266395) DBG | Getting to WaitForSSH function...
	I0108 23:27:56.221542  423858 main.go:141] libmachine: (multinode-266395) Waiting for SSH to be available...
	I0108 23:27:56.224038  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.224401  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.224429  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.224595  423858 main.go:141] libmachine: (multinode-266395) DBG | Using SSH client type: external
	I0108 23:27:56.224624  423858 main.go:141] libmachine: (multinode-266395) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa (-rw-------)
	I0108 23:27:56.224675  423858 main.go:141] libmachine: (multinode-266395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:27:56.224700  423858 main.go:141] libmachine: (multinode-266395) DBG | About to run SSH command:
	I0108 23:27:56.224717  423858 main.go:141] libmachine: (multinode-266395) DBG | exit 0
	I0108 23:27:56.318806  423858 main.go:141] libmachine: (multinode-266395) DBG | SSH cmd err, output: <nil>: 
	I0108 23:27:56.319172  423858 main.go:141] libmachine: (multinode-266395) Calling .GetConfigRaw
	I0108 23:27:56.319877  423858 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:27:56.322113  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.322476  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.322513  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.322833  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:27:56.323036  423858 machine.go:88] provisioning docker machine ...
	I0108 23:27:56.323054  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:56.323292  423858 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:27:56.323502  423858 buildroot.go:166] provisioning hostname "multinode-266395"
	I0108 23:27:56.323525  423858 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:27:56.323699  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:56.325728  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.326100  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.326132  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.326234  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:56.326416  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.326564  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.326701  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:56.326859  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:27:56.327238  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:27:56.327251  423858 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395 && echo "multinode-266395" | sudo tee /etc/hostname
	I0108 23:27:56.463761  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266395
	
	I0108 23:27:56.463787  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:56.466817  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.467180  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.467237  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.467398  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:56.467650  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.467844  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.468016  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:56.468241  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:27:56.468706  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:27:56.468735  423858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266395/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:27:56.602037  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:27:56.602085  423858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:27:56.602113  423858 buildroot.go:174] setting up certificates
	I0108 23:27:56.602146  423858 provision.go:83] configureAuth start
	I0108 23:27:56.602159  423858 main.go:141] libmachine: (multinode-266395) Calling .GetMachineName
	I0108 23:27:56.602474  423858 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:27:56.605131  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.605540  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.605575  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.605669  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:56.607593  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.607935  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.607964  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.608127  423858 provision.go:138] copyHostCerts
	I0108 23:27:56.608169  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:27:56.608220  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:27:56.608233  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:27:56.608318  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:27:56.608435  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:27:56.608474  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:27:56.608485  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:27:56.608527  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:27:56.608596  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:27:56.608618  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:27:56.608627  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:27:56.608660  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:27:56.608725  423858 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.multinode-266395 san=[192.168.39.18 192.168.39.18 localhost 127.0.0.1 minikube multinode-266395]
	I0108 23:27:56.672460  423858 provision.go:172] copyRemoteCerts
	I0108 23:27:56.672534  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:27:56.672559  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:56.675126  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.675499  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.675525  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.675727  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:56.675942  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.676174  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:56.676335  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:27:56.768332  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:27:56.768428  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:27:56.790819  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:27:56.790889  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 23:27:56.819322  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:27:56.819404  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 23:27:56.842571  423858 provision.go:86] duration metric: configureAuth took 240.404922ms
	I0108 23:27:56.842605  423858 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:27:56.842865  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:27:56.842950  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:56.845407  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.845724  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:56.845749  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:56.845956  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:56.846163  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.846326  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:56.846497  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:56.846647  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:27:56.846986  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:27:56.847008  423858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:27:57.155912  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:27:57.155944  423858 machine.go:91] provisioned docker machine in 832.893619ms
	I0108 23:27:57.155972  423858 start.go:300] post-start starting for "multinode-266395" (driver="kvm2")
	I0108 23:27:57.155989  423858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:27:57.156021  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:57.156409  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:27:57.156442  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:57.159123  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.159568  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:57.159600  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.159755  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:57.159937  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:57.160081  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:57.160191  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:27:57.249040  423858 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:27:57.252807  423858 command_runner.go:130] > NAME=Buildroot
	I0108 23:27:57.252829  423858 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 23:27:57.252836  423858 command_runner.go:130] > ID=buildroot
	I0108 23:27:57.252844  423858 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 23:27:57.252852  423858 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 23:27:57.252943  423858 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:27:57.252965  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:27:57.253042  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:27:57.253111  423858 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:27:57.253120  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:27:57.253202  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:27:57.261048  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:27:57.283321  423858 start.go:303] post-start completed in 127.330288ms
	I0108 23:27:57.283351  423858 fix.go:56] fixHost completed within 20.026471289s
	I0108 23:27:57.283406  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:57.285994  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.286406  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:57.286430  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.286652  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:57.286851  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:57.287039  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:57.287191  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:57.287379  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:27:57.287729  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0108 23:27:57.287742  423858 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:27:57.412191  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704756477.359156454
	
	I0108 23:27:57.412220  423858 fix.go:206] guest clock: 1704756477.359156454
	I0108 23:27:57.412234  423858 fix.go:219] Guest: 2024-01-08 23:27:57.359156454 +0000 UTC Remote: 2024-01-08 23:27:57.283383794 +0000 UTC m=+318.130472783 (delta=75.77266ms)
	I0108 23:27:57.412261  423858 fix.go:190] guest clock delta is within tolerance: 75.77266ms
	I0108 23:27:57.412268  423858 start.go:83] releasing machines lock for "multinode-266395", held for 20.155427283s
	I0108 23:27:57.412307  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:57.412563  423858 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:27:57.415177  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.415584  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:57.415607  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.415797  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:57.416312  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:57.416503  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:27:57.416600  423858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:27:57.416658  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:57.416763  423858 ssh_runner.go:195] Run: cat /version.json
	I0108 23:27:57.416793  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:27:57.419376  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.419563  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.419807  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:57.419839  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.419925  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:57.419957  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:57.419985  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:57.420146  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:57.420153  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:27:57.420341  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:27:57.420369  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:57.420482  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:27:57.420494  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:27:57.420601  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:27:57.529955  423858 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:27:57.530865  423858 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 23:27:57.531082  423858 ssh_runner.go:195] Run: systemctl --version
	I0108 23:27:57.536683  423858 command_runner.go:130] > systemd 247 (247)
	I0108 23:27:57.536710  423858 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 23:27:57.536882  423858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:27:57.679599  423858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:27:57.685977  423858 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 23:27:57.686215  423858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:27:57.686292  423858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:27:57.701170  423858 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 23:27:57.701232  423858 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:27:57.701246  423858 start.go:475] detecting cgroup driver to use...
	I0108 23:27:57.701312  423858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:27:57.714637  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:27:57.726210  423858 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:27:57.726272  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:27:57.738683  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:27:57.750882  423858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:27:57.858225  423858 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 23:27:57.858314  423858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:27:57.872055  423858 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 23:27:57.978839  423858 docker.go:219] disabling docker service ...
	I0108 23:27:57.978923  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:27:57.993057  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:27:58.004212  423858 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 23:27:58.005273  423858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:27:58.018226  423858 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 23:27:58.121202  423858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:27:58.132901  423858 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 23:27:58.133206  423858 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 23:27:58.233321  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:27:58.244584  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:27:58.261453  423858 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:27:58.261955  423858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:27:58.262048  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:27:58.270858  423858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:27:58.270924  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:27:58.279517  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:27:58.287980  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:27:58.296418  423858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:27:58.305701  423858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:27:58.313468  423858 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:27:58.313505  423858 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:27:58.313542  423858 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 23:27:58.325006  423858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:27:58.333009  423858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:27:58.434897  423858 ssh_runner.go:195] Run: sudo systemctl restart crio
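	The run of commands above is the CRI-O reconfiguration step: point the runtime at the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, enable the netfilter/ip_forward prerequisites, then reload and restart. A minimal consolidated sketch of those same steps is shown below; it uses the file paths and values that appear in this log and is an illustration only, not part of the captured test output.
	
		#!/bin/bash
		# Point CRI-O at the pause image used by this Kubernetes version.
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		# Use cgroupfs as the cgroup manager and run conmon in the pod cgroup.
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# Make bridged traffic visible to iptables and enable IPv4 forwarding.
		sudo modprobe br_netfilter
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		# Reload systemd units and restart the runtime.
		sudo systemctl daemon-reload
		sudo systemctl restart crio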
	I0108 23:27:58.592655  423858 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:27:58.592743  423858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:27:58.598097  423858 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:27:58.598124  423858 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:27:58.598135  423858 command_runner.go:130] > Device: 16h/22d	Inode: 807         Links: 1
	I0108 23:27:58.598145  423858 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:27:58.598153  423858 command_runner.go:130] > Access: 2024-01-08 23:27:58.523727036 +0000
	I0108 23:27:58.598163  423858 command_runner.go:130] > Modify: 2024-01-08 23:27:58.523727036 +0000
	I0108 23:27:58.598189  423858 command_runner.go:130] > Change: 2024-01-08 23:27:58.523727036 +0000
	I0108 23:27:58.598196  423858 command_runner.go:130] >  Birth: -
	I0108 23:27:58.598236  423858 start.go:543] Will wait 60s for crictl version
	I0108 23:27:58.598286  423858 ssh_runner.go:195] Run: which crictl
	I0108 23:27:58.601975  423858 command_runner.go:130] > /usr/bin/crictl
	I0108 23:27:58.602033  423858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:27:58.646193  423858 command_runner.go:130] > Version:  0.1.0
	I0108 23:27:58.646219  423858 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:27:58.646223  423858 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 23:27:58.646235  423858 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:27:58.647985  423858 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:27:58.648066  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:27:58.694972  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:27:58.695004  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:27:58.695012  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:27:58.695017  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:27:58.695023  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:27:58.695028  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:27:58.695032  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:27:58.695037  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:27:58.695054  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:27:58.695061  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:27:58.695065  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:27:58.695069  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:27:58.696253  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:27:58.745035  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:27:58.745069  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:27:58.745078  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:27:58.745082  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:27:58.745089  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:27:58.745098  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:27:58.745102  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:27:58.745106  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:27:58.745114  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:27:58.745122  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:27:58.745126  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:27:58.745130  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:27:58.748512  423858 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 23:27:58.749890  423858 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:27:58.752719  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:58.753075  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:27:58.753104  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:27:58.753357  423858 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:27:58.757494  423858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:27:58.770236  423858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:27:58.770307  423858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:27:58.802864  423858 command_runner.go:130] > {
	I0108 23:27:58.802890  423858 command_runner.go:130] >   "images": [
	I0108 23:27:58.802894  423858 command_runner.go:130] >     {
	I0108 23:27:58.802902  423858 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 23:27:58.802908  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:27:58.802913  423858 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 23:27:58.802917  423858 command_runner.go:130] >       ],
	I0108 23:27:58.802921  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:27:58.802942  423858 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 23:27:58.802951  423858 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 23:27:58.802955  423858 command_runner.go:130] >       ],
	I0108 23:27:58.802960  423858 command_runner.go:130] >       "size": "750414",
	I0108 23:27:58.802966  423858 command_runner.go:130] >       "uid": {
	I0108 23:27:58.802970  423858 command_runner.go:130] >         "value": "65535"
	I0108 23:27:58.802974  423858 command_runner.go:130] >       },
	I0108 23:27:58.802978  423858 command_runner.go:130] >       "username": "",
	I0108 23:27:58.802994  423858 command_runner.go:130] >       "spec": null,
	I0108 23:27:58.803001  423858 command_runner.go:130] >       "pinned": false
	I0108 23:27:58.803004  423858 command_runner.go:130] >     }
	I0108 23:27:58.803008  423858 command_runner.go:130] >   ]
	I0108 23:27:58.803013  423858 command_runner.go:130] > }
	I0108 23:27:58.804248  423858 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 23:27:58.804311  423858 ssh_runner.go:195] Run: which lz4
	I0108 23:27:58.808014  423858 command_runner.go:130] > /usr/bin/lz4
	I0108 23:27:58.808176  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 23:27:58.808277  423858 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 23:27:58.812075  423858 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:27:58.812385  423858 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:27:58.812417  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 23:28:00.627253  423858 crio.go:444] Took 1.819003 seconds to copy over tarball
	I0108 23:28:00.627325  423858 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 23:28:03.433084  423858 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.80572366s)
	I0108 23:28:03.433125  423858 crio.go:451] Took 2.805843 seconds to extract the tarball
	I0108 23:28:03.433140  423858 ssh_runner.go:146] rm: /preloaded.tar.lz4
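	The preload handling above (stat check on the guest, scp of the ~458 MB tarball, lz4 extraction into /var, cleanup) condenses to roughly the following. This is a rough sketch using the tarball path and guest address from this run; SSH key flags are omitted for brevity and it is not part of the captured test output.
	
		#!/bin/bash
		# Host-side preload tarball for k8s v1.28.4 with the cri-o runtime (path from this run).
		PRELOAD=/home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
		GUEST=docker@192.168.39.18
		# Copy the tarball only if the guest does not already have it (as in the log, straight to /).
		ssh "$GUEST" 'stat /preloaded.tar.lz4' >/dev/null 2>&1 || scp "$PRELOAD" "$GUEST":/preloaded.tar.lz4
		# Unpack images and container state into /var, preserving security xattrs, then clean up.
		ssh "$GUEST" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'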
	I0108 23:28:03.473161  423858 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:28:03.517455  423858 command_runner.go:130] > {
	I0108 23:28:03.517485  423858 command_runner.go:130] >   "images": [
	I0108 23:28:03.517491  423858 command_runner.go:130] >     {
	I0108 23:28:03.517505  423858 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 23:28:03.517512  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.517523  423858 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 23:28:03.517529  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517536  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.517548  423858 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 23:28:03.517568  423858 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 23:28:03.517581  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517589  423858 command_runner.go:130] >       "size": "65258016",
	I0108 23:28:03.517596  423858 command_runner.go:130] >       "uid": null,
	I0108 23:28:03.517621  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.517641  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.517647  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.517653  423858 command_runner.go:130] >     },
	I0108 23:28:03.517659  423858 command_runner.go:130] >     {
	I0108 23:28:03.517669  423858 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 23:28:03.517679  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.517687  423858 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 23:28:03.517696  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517703  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.517719  423858 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 23:28:03.517734  423858 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 23:28:03.517744  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517757  423858 command_runner.go:130] >       "size": "31470524",
	I0108 23:28:03.517768  423858 command_runner.go:130] >       "uid": null,
	I0108 23:28:03.517774  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.517784  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.517791  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.517801  423858 command_runner.go:130] >     },
	I0108 23:28:03.517810  423858 command_runner.go:130] >     {
	I0108 23:28:03.517820  423858 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 23:28:03.517830  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.517838  423858 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 23:28:03.517846  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517853  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.517868  423858 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 23:28:03.517882  423858 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 23:28:03.517891  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517898  423858 command_runner.go:130] >       "size": "53621675",
	I0108 23:28:03.517908  423858 command_runner.go:130] >       "uid": null,
	I0108 23:28:03.517918  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.517928  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.517934  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.517941  423858 command_runner.go:130] >     },
	I0108 23:28:03.517947  423858 command_runner.go:130] >     {
	I0108 23:28:03.517961  423858 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 23:28:03.517975  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.517984  423858 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 23:28:03.517992  423858 command_runner.go:130] >       ],
	I0108 23:28:03.517999  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518013  423858 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 23:28:03.518028  423858 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 23:28:03.518045  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518054  423858 command_runner.go:130] >       "size": "295456551",
	I0108 23:28:03.518063  423858 command_runner.go:130] >       "uid": {
	I0108 23:28:03.518071  423858 command_runner.go:130] >         "value": "0"
	I0108 23:28:03.518080  423858 command_runner.go:130] >       },
	I0108 23:28:03.518089  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518098  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518106  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518114  423858 command_runner.go:130] >     },
	I0108 23:28:03.518119  423858 command_runner.go:130] >     {
	I0108 23:28:03.518132  423858 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 23:28:03.518143  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.518157  423858 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 23:28:03.518166  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518172  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518182  423858 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 23:28:03.518192  423858 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 23:28:03.518198  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518203  423858 command_runner.go:130] >       "size": "127226832",
	I0108 23:28:03.518210  423858 command_runner.go:130] >       "uid": {
	I0108 23:28:03.518214  423858 command_runner.go:130] >         "value": "0"
	I0108 23:28:03.518220  423858 command_runner.go:130] >       },
	I0108 23:28:03.518224  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518228  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518232  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518237  423858 command_runner.go:130] >     },
	I0108 23:28:03.518241  423858 command_runner.go:130] >     {
	I0108 23:28:03.518249  423858 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 23:28:03.518254  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.518261  423858 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 23:28:03.518268  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518275  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518283  423858 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 23:28:03.518293  423858 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 23:28:03.518296  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518301  423858 command_runner.go:130] >       "size": "123261750",
	I0108 23:28:03.518307  423858 command_runner.go:130] >       "uid": {
	I0108 23:28:03.518311  423858 command_runner.go:130] >         "value": "0"
	I0108 23:28:03.518315  423858 command_runner.go:130] >       },
	I0108 23:28:03.518319  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518326  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518329  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518333  423858 command_runner.go:130] >     },
	I0108 23:28:03.518337  423858 command_runner.go:130] >     {
	I0108 23:28:03.518343  423858 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 23:28:03.518349  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.518354  423858 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 23:28:03.518360  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518369  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518378  423858 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 23:28:03.518389  423858 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 23:28:03.518393  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518401  423858 command_runner.go:130] >       "size": "74749335",
	I0108 23:28:03.518410  423858 command_runner.go:130] >       "uid": null,
	I0108 23:28:03.518414  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518421  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518424  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518428  423858 command_runner.go:130] >     },
	I0108 23:28:03.518432  423858 command_runner.go:130] >     {
	I0108 23:28:03.518440  423858 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 23:28:03.518446  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.518451  423858 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 23:28:03.518457  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518461  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518483  423858 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 23:28:03.518493  423858 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 23:28:03.518499  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518525  423858 command_runner.go:130] >       "size": "61551410",
	I0108 23:28:03.518541  423858 command_runner.go:130] >       "uid": {
	I0108 23:28:03.518545  423858 command_runner.go:130] >         "value": "0"
	I0108 23:28:03.518549  423858 command_runner.go:130] >       },
	I0108 23:28:03.518559  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518567  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518571  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518577  423858 command_runner.go:130] >     },
	I0108 23:28:03.518581  423858 command_runner.go:130] >     {
	I0108 23:28:03.518587  423858 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 23:28:03.518594  423858 command_runner.go:130] >       "repoTags": [
	I0108 23:28:03.518599  423858 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 23:28:03.518602  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518609  423858 command_runner.go:130] >       "repoDigests": [
	I0108 23:28:03.518616  423858 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 23:28:03.518625  423858 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 23:28:03.518629  423858 command_runner.go:130] >       ],
	I0108 23:28:03.518638  423858 command_runner.go:130] >       "size": "750414",
	I0108 23:28:03.518644  423858 command_runner.go:130] >       "uid": {
	I0108 23:28:03.518649  423858 command_runner.go:130] >         "value": "65535"
	I0108 23:28:03.518653  423858 command_runner.go:130] >       },
	I0108 23:28:03.518657  423858 command_runner.go:130] >       "username": "",
	I0108 23:28:03.518664  423858 command_runner.go:130] >       "spec": null,
	I0108 23:28:03.518668  423858 command_runner.go:130] >       "pinned": false
	I0108 23:28:03.518674  423858 command_runner.go:130] >     }
	I0108 23:28:03.518677  423858 command_runner.go:130] >   ]
	I0108 23:28:03.518682  423858 command_runner.go:130] > }
	I0108 23:28:03.518800  423858 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 23:28:03.518812  423858 cache_images.go:84] Images are preloaded, skipping loading
	I0108 23:28:03.518875  423858 ssh_runner.go:195] Run: crio config
	I0108 23:28:03.576099  423858 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:28:03.576130  423858 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:28:03.576140  423858 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:28:03.576145  423858 command_runner.go:130] > #
	I0108 23:28:03.576164  423858 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:28:03.576175  423858 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:28:03.576185  423858 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:28:03.576194  423858 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:28:03.576200  423858 command_runner.go:130] > # reload'.
	I0108 23:28:03.576210  423858 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:28:03.576219  423858 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:28:03.576231  423858 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:28:03.576242  423858 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:28:03.576250  423858 command_runner.go:130] > [crio]
	I0108 23:28:03.576260  423858 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:28:03.576274  423858 command_runner.go:130] > # containers images, in this directory.
	I0108 23:28:03.576282  423858 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 23:28:03.576298  423858 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:28:03.576306  423858 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 23:28:03.576316  423858 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:28:03.576325  423858 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:28:03.576333  423858 command_runner.go:130] > storage_driver = "overlay"
	I0108 23:28:03.576351  423858 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:28:03.576363  423858 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:28:03.576368  423858 command_runner.go:130] > storage_option = [
	I0108 23:28:03.576374  423858 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 23:28:03.576378  423858 command_runner.go:130] > ]
	I0108 23:28:03.576384  423858 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:28:03.576391  423858 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:28:03.576395  423858 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:28:03.576404  423858 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:28:03.576409  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:28:03.576415  423858 command_runner.go:130] > # always happen on a node reboot
	I0108 23:28:03.576420  423858 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:28:03.576425  423858 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:28:03.576433  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:28:03.576450  423858 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:28:03.576463  423858 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:28:03.576487  423858 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:28:03.576504  423858 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:28:03.576514  423858 command_runner.go:130] > # internal_wipe = true
	I0108 23:28:03.576523  423858 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:28:03.576529  423858 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:28:03.576537  423858 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:28:03.576546  423858 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:28:03.576558  423858 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:28:03.576566  423858 command_runner.go:130] > [crio.api]
	I0108 23:28:03.576578  423858 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:28:03.576589  423858 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:28:03.576602  423858 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:28:03.576618  423858 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:28:03.576631  423858 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:28:03.576639  423858 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:28:03.576647  423858 command_runner.go:130] > # stream_port = "0"
	I0108 23:28:03.576652  423858 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:28:03.576657  423858 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:28:03.576663  423858 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:28:03.576670  423858 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:28:03.576679  423858 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:28:03.576687  423858 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:28:03.576691  423858 command_runner.go:130] > # minutes.
	I0108 23:28:03.576698  423858 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:28:03.576704  423858 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:28:03.576712  423858 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:28:03.576725  423858 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:28:03.576735  423858 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:28:03.576749  423858 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:28:03.576764  423858 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:28:03.576774  423858 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:28:03.576785  423858 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:28:03.576797  423858 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 23:28:03.576809  423858 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:28:03.576819  423858 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 23:28:03.576853  423858 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:28:03.576866  423858 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:28:03.576873  423858 command_runner.go:130] > [crio.runtime]
	I0108 23:28:03.576887  423858 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:28:03.576902  423858 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:28:03.576913  423858 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:28:03.576924  423858 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:28:03.576931  423858 command_runner.go:130] > # default_ulimits = [
	I0108 23:28:03.576937  423858 command_runner.go:130] > # ]
	I0108 23:28:03.576950  423858 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:28:03.576960  423858 command_runner.go:130] > # no_pivot = false
	I0108 23:28:03.576970  423858 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:28:03.576982  423858 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:28:03.576995  423858 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:28:03.577008  423858 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:28:03.577016  423858 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:28:03.577031  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:28:03.577042  423858 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 23:28:03.577051  423858 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:28:03.577064  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:28:03.577074  423858 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:28:03.577088  423858 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:28:03.577101  423858 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:28:03.577115  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:28:03.577124  423858 command_runner.go:130] > conmon_env = [
	I0108 23:28:03.577134  423858 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 23:28:03.577142  423858 command_runner.go:130] > ]
	I0108 23:28:03.577149  423858 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:28:03.577160  423858 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:28:03.577173  423858 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:28:03.577183  423858 command_runner.go:130] > # default_env = [
	I0108 23:28:03.577191  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577200  423858 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:28:03.577210  423858 command_runner.go:130] > # selinux = false
	I0108 23:28:03.577226  423858 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:28:03.577238  423858 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:28:03.577251  423858 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:28:03.577262  423858 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:28:03.577272  423858 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:28:03.577288  423858 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:28:03.577301  423858 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:28:03.577311  423858 command_runner.go:130] > # which might increase security.
	I0108 23:28:03.577319  423858 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 23:28:03.577332  423858 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:28:03.577346  423858 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:28:03.577359  423858 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:28:03.577373  423858 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:28:03.577384  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:28:03.577393  423858 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:28:03.577406  423858 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:28:03.577414  423858 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:28:03.577427  423858 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:28:03.577441  423858 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:28:03.577451  423858 command_runner.go:130] > # irqbalance daemon.
	I0108 23:28:03.577460  423858 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:28:03.577476  423858 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:28:03.577492  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:28:03.577510  423858 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:28:03.577523  423858 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:28:03.577533  423858 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:28:03.577545  423858 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:28:03.577555  423858 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:28:03.577569  423858 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:28:03.577582  423858 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:28:03.577593  423858 command_runner.go:130] > # will be added.
	I0108 23:28:03.577604  423858 command_runner.go:130] > # default_capabilities = [
	I0108 23:28:03.577614  423858 command_runner.go:130] > # 	"CHOWN",
	I0108 23:28:03.577621  423858 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:28:03.577631  423858 command_runner.go:130] > # 	"FSETID",
	I0108 23:28:03.577638  423858 command_runner.go:130] > # 	"FOWNER",
	I0108 23:28:03.577647  423858 command_runner.go:130] > # 	"SETGID",
	I0108 23:28:03.577658  423858 command_runner.go:130] > # 	"SETUID",
	I0108 23:28:03.577665  423858 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:28:03.577676  423858 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:28:03.577685  423858 command_runner.go:130] > # 	"KILL",
	I0108 23:28:03.577695  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577708  423858 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:28:03.577722  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:28:03.577730  423858 command_runner.go:130] > # default_sysctls = [
	I0108 23:28:03.577733  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577745  423858 command_runner.go:130] > # List of devices on the host that a
	I0108 23:28:03.577756  423858 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:28:03.577763  423858 command_runner.go:130] > # allowed_devices = [
	I0108 23:28:03.577770  423858 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:28:03.577776  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577785  423858 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:28:03.577801  423858 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:28:03.577812  423858 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:28:03.577854  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:28:03.577863  423858 command_runner.go:130] > # additional_devices = [
	I0108 23:28:03.577868  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577876  423858 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:28:03.577886  423858 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:28:03.577896  423858 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:28:03.577906  423858 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:28:03.577914  423858 command_runner.go:130] > # ]
	I0108 23:28:03.577924  423858 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:28:03.577935  423858 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:28:03.577945  423858 command_runner.go:130] > # Defaults to false.
	I0108 23:28:03.577953  423858 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:28:03.577966  423858 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:28:03.577977  423858 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:28:03.577984  423858 command_runner.go:130] > # hooks_dir = [
	I0108 23:28:03.577989  423858 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:28:03.577994  423858 command_runner.go:130] > # ]
	I0108 23:28:03.578000  423858 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:28:03.578008  423858 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:28:03.578014  423858 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:28:03.578019  423858 command_runner.go:130] > #
	I0108 23:28:03.578025  423858 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:28:03.578033  423858 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:28:03.578043  423858 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:28:03.578051  423858 command_runner.go:130] > #
	I0108 23:28:03.578060  423858 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:28:03.578073  423858 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:28:03.578088  423858 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:28:03.578099  423858 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:28:03.578107  423858 command_runner.go:130] > #
	I0108 23:28:03.578118  423858 command_runner.go:130] > # default_mounts_file = ""
	I0108 23:28:03.578127  423858 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:28:03.578140  423858 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:28:03.578149  423858 command_runner.go:130] > pids_limit = 1024
	I0108 23:28:03.578166  423858 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:28:03.578179  423858 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:28:03.578192  423858 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:28:03.578208  423858 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:28:03.578218  423858 command_runner.go:130] > # log_size_max = -1
	I0108 23:28:03.578229  423858 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:28:03.578240  423858 command_runner.go:130] > # log_to_journald = false
	I0108 23:28:03.578258  423858 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:28:03.578269  423858 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:28:03.578280  423858 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:28:03.578291  423858 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:28:03.578304  423858 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:28:03.578311  423858 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:28:03.578324  423858 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:28:03.578334  423858 command_runner.go:130] > # read_only = false
	I0108 23:28:03.578345  423858 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:28:03.578358  423858 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:28:03.578368  423858 command_runner.go:130] > # live configuration reload.
	I0108 23:28:03.578377  423858 command_runner.go:130] > # log_level = "info"
	I0108 23:28:03.578386  423858 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:28:03.578392  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:28:03.578399  423858 command_runner.go:130] > # log_filter = ""
	I0108 23:28:03.578405  423858 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:28:03.578413  423858 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:28:03.578417  423858 command_runner.go:130] > # separated by comma.
	I0108 23:28:03.578425  423858 command_runner.go:130] > # uid_mappings = ""
	I0108 23:28:03.578432  423858 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:28:03.578440  423858 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:28:03.578445  423858 command_runner.go:130] > # separated by comma.
	I0108 23:28:03.578451  423858 command_runner.go:130] > # gid_mappings = ""
	I0108 23:28:03.578457  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:28:03.578468  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:28:03.578482  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:28:03.578494  423858 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:28:03.578506  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:28:03.578519  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:28:03.578532  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:28:03.578543  423858 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:28:03.578554  423858 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:28:03.578569  423858 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:28:03.578579  423858 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:28:03.578587  423858 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:28:03.578599  423858 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:28:03.578615  423858 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:28:03.578627  423858 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:28:03.578635  423858 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:28:03.578645  423858 command_runner.go:130] > drop_infra_ctr = false
	I0108 23:28:03.578651  423858 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:28:03.578663  423858 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:28:03.578677  423858 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:28:03.578686  423858 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:28:03.578699  423858 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:28:03.578710  423858 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:28:03.578720  423858 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:28:03.578734  423858 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:28:03.578745  423858 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 23:28:03.578755  423858 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:28:03.578769  423858 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:28:03.578780  423858 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:28:03.578794  423858 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:28:03.578805  423858 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:28:03.578825  423858 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 23:28:03.578843  423858 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:28:03.578856  423858 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:28:03.578872  423858 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:28:03.578882  423858 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:28:03.578890  423858 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:28:03.578895  423858 command_runner.go:130] > # ]
	I0108 23:28:03.578909  423858 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:28:03.578923  423858 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:28:03.578936  423858 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:28:03.578951  423858 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:28:03.578959  423858 command_runner.go:130] > #
	I0108 23:28:03.578967  423858 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:28:03.578978  423858 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:28:03.578987  423858 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:28:03.578998  423858 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:28:03.579010  423858 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:28:03.579021  423858 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:28:03.579032  423858 command_runner.go:130] > # Where:
	I0108 23:28:03.579044  423858 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:28:03.579056  423858 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:28:03.579065  423858 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:28:03.579075  423858 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:28:03.579085  423858 command_runner.go:130] > #   in $PATH.
	I0108 23:28:03.579096  423858 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:28:03.579108  423858 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:28:03.579121  423858 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:28:03.579128  423858 command_runner.go:130] > #   state.
	I0108 23:28:03.579140  423858 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:28:03.579152  423858 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 23:28:03.579166  423858 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:28:03.579180  423858 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:28:03.579193  423858 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:28:03.579207  423858 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:28:03.579220  423858 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:28:03.579234  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:28:03.579260  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:28:03.579271  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:28:03.579286  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:28:03.579301  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:28:03.579314  423858 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:28:03.579328  423858 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:28:03.579339  423858 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:28:03.579351  423858 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:28:03.579370  423858 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:28:03.579380  423858 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 23:28:03.579387  423858 command_runner.go:130] > runtime_type = "oci"
	I0108 23:28:03.579397  423858 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:28:03.579406  423858 command_runner.go:130] > runtime_config_path = ""
	I0108 23:28:03.579414  423858 command_runner.go:130] > monitor_path = ""
	I0108 23:28:03.579421  423858 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:28:03.579427  423858 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:28:03.579436  423858 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:28:03.579443  423858 command_runner.go:130] > # running containers
	I0108 23:28:03.579450  423858 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:28:03.579458  423858 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:28:03.579522  423858 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:28:03.579537  423858 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:28:03.579546  423858 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:28:03.579554  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:28:03.579565  423858 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:28:03.579574  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:28:03.579586  423858 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:28:03.579596  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:28:03.579609  423858 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:28:03.579618  423858 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:28:03.579629  423858 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:28:03.579649  423858 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:28:03.579665  423858 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:28:03.579677  423858 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:28:03.579697  423858 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:28:03.579708  423858 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:28:03.579724  423858 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:28:03.579744  423858 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:28:03.579754  423858 command_runner.go:130] > # Example:
	I0108 23:28:03.579765  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:28:03.579776  423858 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:28:03.579787  423858 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:28:03.579799  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:28:03.579806  423858 command_runner.go:130] > # cpuset = 0
	I0108 23:28:03.579810  423858 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:28:03.579819  423858 command_runner.go:130] > # Where:
	I0108 23:28:03.579831  423858 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:28:03.579846  423858 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:28:03.579858  423858 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:28:03.579870  423858 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:28:03.579886  423858 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:28:03.579897  423858 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:28:03.579903  423858 command_runner.go:130] > # 
	I0108 23:28:03.579913  423858 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:28:03.579925  423858 command_runner.go:130] > #
	I0108 23:28:03.579939  423858 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:28:03.579952  423858 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:28:03.579966  423858 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:28:03.579980  423858 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:28:03.579991  423858 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:28:03.579997  423858 command_runner.go:130] > [crio.image]
	I0108 23:28:03.580007  423858 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:28:03.580018  423858 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:28:03.580035  423858 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:28:03.580048  423858 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:28:03.580058  423858 command_runner.go:130] > # global_auth_file = ""
	I0108 23:28:03.580071  423858 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:28:03.580079  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:28:03.580087  423858 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:28:03.580098  423858 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:28:03.580115  423858 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:28:03.580127  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:28:03.580142  423858 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:28:03.580155  423858 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:28:03.580168  423858 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 23:28:03.580179  423858 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 23:28:03.580190  423858 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:28:03.580200  423858 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:28:03.580214  423858 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:28:03.580228  423858 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:28:03.580241  423858 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:28:03.580254  423858 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:28:03.580266  423858 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:28:03.580273  423858 command_runner.go:130] > # signature_policy = ""
	I0108 23:28:03.580280  423858 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:28:03.580286  423858 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:28:03.580293  423858 command_runner.go:130] > # changing them here.
	I0108 23:28:03.580300  423858 command_runner.go:130] > # insecure_registries = [
	I0108 23:28:03.580306  423858 command_runner.go:130] > # ]
	I0108 23:28:03.580316  423858 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:28:03.580327  423858 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:28:03.580334  423858 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:28:03.580343  423858 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:28:03.580350  423858 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:28:03.580360  423858 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:28:03.580364  423858 command_runner.go:130] > # CNI plugins.
	I0108 23:28:03.580371  423858 command_runner.go:130] > [crio.network]
	I0108 23:28:03.580380  423858 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:28:03.580390  423858 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:28:03.580397  423858 command_runner.go:130] > # cni_default_network = ""
	I0108 23:28:03.580408  423858 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:28:03.580416  423858 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:28:03.580428  423858 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:28:03.580435  423858 command_runner.go:130] > # plugin_dirs = [
	I0108 23:28:03.580442  423858 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:28:03.580447  423858 command_runner.go:130] > # ]
	I0108 23:28:03.580453  423858 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:28:03.580456  423858 command_runner.go:130] > [crio.metrics]
	I0108 23:28:03.580468  423858 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:28:03.580483  423858 command_runner.go:130] > enable_metrics = true
	I0108 23:28:03.580495  423858 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:28:03.580506  423858 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:28:03.580519  423858 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:28:03.580532  423858 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:28:03.580545  423858 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:28:03.580553  423858 command_runner.go:130] > # metrics_collectors = [
	I0108 23:28:03.580557  423858 command_runner.go:130] > # 	"operations",
	I0108 23:28:03.580567  423858 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:28:03.580579  423858 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:28:03.580589  423858 command_runner.go:130] > # 	"operations_errors",
	I0108 23:28:03.580600  423858 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:28:03.580609  423858 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:28:03.580620  423858 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:28:03.580630  423858 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:28:03.580639  423858 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:28:03.580647  423858 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:28:03.580661  423858 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:28:03.580672  423858 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:28:03.580679  423858 command_runner.go:130] > # 	"containers_oom",
	I0108 23:28:03.580689  423858 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:28:03.580699  423858 command_runner.go:130] > # 	"operations_total",
	I0108 23:28:03.580707  423858 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:28:03.580717  423858 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:28:03.580725  423858 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:28:03.580734  423858 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:28:03.580740  423858 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:28:03.580748  423858 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:28:03.580759  423858 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:28:03.580770  423858 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:28:03.580777  423858 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:28:03.580786  423858 command_runner.go:130] > # ]
	I0108 23:28:03.580796  423858 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:28:03.580805  423858 command_runner.go:130] > # metrics_port = 9090
	I0108 23:28:03.580816  423858 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:28:03.580829  423858 command_runner.go:130] > # metrics_socket = ""
	I0108 23:28:03.580836  423858 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:28:03.580842  423858 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:28:03.580853  423858 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:28:03.580864  423858 command_runner.go:130] > # certificate on any modification event.
	I0108 23:28:03.580872  423858 command_runner.go:130] > # metrics_cert = ""
	I0108 23:28:03.580884  423858 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:28:03.580896  423858 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:28:03.580906  423858 command_runner.go:130] > # metrics_key = ""
	I0108 23:28:03.580916  423858 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:28:03.580928  423858 command_runner.go:130] > [crio.tracing]
	I0108 23:28:03.580940  423858 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:28:03.580950  423858 command_runner.go:130] > # enable_tracing = false
	I0108 23:28:03.580963  423858 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 23:28:03.580975  423858 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:28:03.580990  423858 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:28:03.581001  423858 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:28:03.581010  423858 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:28:03.581019  423858 command_runner.go:130] > [crio.stats]
	I0108 23:28:03.581026  423858 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:28:03.581036  423858 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:28:03.581047  423858 command_runner.go:130] > # stats_collection_period = 0
	I0108 23:28:03.581082  423858 command_runner.go:130] ! time="2024-01-08 23:28:03.518926987Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 23:28:03.581102  423858 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
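	The dump above is the effective CRI-O configuration the runner inspects before bootstrapping the node. As a minimal illustration (not part of the test harness; the file path and the keys checked are assumptions), a line-oriented Go sketch that pulls a couple of the values the log later relies on, such as cgroup_manager and pause_image, out of a crio.conf-style file:

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "strings"
	    )

	    // readTOMLValue returns the unquoted value of a `key = "value"` line in a
	    // crio.conf-style file. It is a line-oriented sketch, not a full TOML parser,
	    // so it returns the first match regardless of which [table] it appears in.
	    func readTOMLValue(path, key string) (string, error) {
	        f, err := os.Open(path)
	        if err != nil {
	            return "", err
	        }
	        defer f.Close()

	        sc := bufio.NewScanner(f)
	        for sc.Scan() {
	            line := strings.TrimSpace(sc.Text())
	            if strings.HasPrefix(line, "#") {
	                continue
	            }
	            k, v, ok := strings.Cut(line, "=")
	            if !ok || strings.TrimSpace(k) != key {
	                continue
	            }
	            return strings.Trim(strings.TrimSpace(v), `"`), nil
	        }
	        return "", fmt.Errorf("%s: key %q not found", path, key)
	    }

	    func main() {
	        for _, key := range []string{"cgroup_manager", "pause_image"} {
	            v, err := readTOMLValue("/etc/crio/crio.conf", key) // path is an assumption
	            if err != nil {
	                fmt.Println(err)
	                continue
	            }
	            fmt.Printf("%s = %q\n", key, v)
	        }
	    }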
	I0108 23:28:03.581227  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:28:03.581267  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:28:03.581295  423858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:28:03.581361  423858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266395 NodeName:multinode-266395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:28:03.581551  423858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
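	The rendered kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A stdlib-only Go sketch (working on a local copy; the file name is an assumption) that splits the stream on the document separators and reports each document's kind, as a quick sanity check that all four parts survived templating:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        data, err := os.ReadFile("kubeadm.yaml") // local copy; path is an assumption
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        // Multi-document YAML separates documents with "---" on its own line.
	        for i, doc := range strings.Split(string(data), "\n---\n") {
	            kind := "(none)"
	            for _, line := range strings.Split(doc, "\n") {
	                trimmed := strings.TrimSpace(line)
	                if strings.HasPrefix(trimmed, "kind:") {
	                    kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
	                    break
	                }
	            }
	            fmt.Printf("document %d: kind=%s\n", i+1, kind)
	        }
	    }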
	I0108 23:28:03.581645  423858 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-266395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
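	The [Unit]/[Service] fragment above is what ends up in the kubelet systemd drop-in that the runner copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf shortly afterwards. A rough Go sketch assembling that drop-in from the ExecStart line shown above (the exact contents of the real 375-byte file are not shown in the log, so this is an approximation written to a local file):

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    func main() {
	        execStart := "/var/lib/minikube/binaries/v1.28.4/kubelet" +
	            " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf" +
	            " --config=/var/lib/kubelet/config.yaml" +
	            " --container-runtime-endpoint=unix:///var/run/crio/crio.sock" +
	            " --hostname-override=multinode-266395" +
	            " --kubeconfig=/etc/kubernetes/kubelet.conf" +
	            " --node-ip=192.168.39.18"

	        // The empty ExecStart= first clears the ExecStart inherited from the
	        // main kubelet.service unit before the override is applied.
	        dropIn := fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s\n", execStart)

	        // Written locally for illustration; the real target is
	        // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	        if err := os.WriteFile("10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }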
	I0108 23:28:03.581711  423858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:28:03.590210  423858 command_runner.go:130] > kubeadm
	I0108 23:28:03.590229  423858 command_runner.go:130] > kubectl
	I0108 23:28:03.590233  423858 command_runner.go:130] > kubelet
	I0108 23:28:03.590381  423858 binaries.go:44] Found k8s binaries, skipping transfer
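	The sudo ls above is how the runner decides whether the kubeadm, kubectl and kubelet binaries still need to be transferred to the node. A local Go sketch of the same presence check (using os.Stat instead of ls over SSH; the directory comes from the command above):

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    func main() {
	        dir := "/var/lib/minikube/binaries/v1.28.4"
	        missing := 0
	        for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
	            if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
	                fmt.Printf("missing %s: %v\n", name, err)
	                missing++
	            }
	        }
	        if missing == 0 {
	            fmt.Println("found k8s binaries, skipping transfer")
	        }
	    }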
	I0108 23:28:03.590446  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:28:03.598307  423858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0108 23:28:03.613673  423858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:28:03.629576  423858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 23:28:03.645850  423858 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0108 23:28:03.649697  423858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
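	The bash one-liner above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current mapping for 192.168.39.18. The same ensure-entry step as a Go sketch, operating on a local copy of the hosts file rather than /etc/hosts itself:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    func main() {
	        const (
	            hostsPath = "hosts" // local copy; the real target is /etc/hosts
	            entry     = "192.168.39.18\tcontrol-plane.minikube.internal"
	        )

	        data, err := os.ReadFile(hostsPath)
	        if err != nil && !os.IsNotExist(err) {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }

	        // Keep every line except old control-plane.minikube.internal entries,
	        // mirroring the grep -v in the logged command.
	        var kept []string
	        for _, line := range strings.Split(string(data), "\n") {
	            if line == "" || strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
	                continue
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, entry)

	        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }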
	I0108 23:28:03.661866  423858 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395 for IP: 192.168.39.18
	I0108 23:28:03.661925  423858 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:28:03.662102  423858 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:28:03.662148  423858 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:28:03.662254  423858 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key
	I0108 23:28:03.662313  423858 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key.c202909e
	I0108 23:28:03.662352  423858 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key
	I0108 23:28:03.662363  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 23:28:03.662374  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 23:28:03.662386  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 23:28:03.662398  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 23:28:03.662414  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:28:03.662429  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:28:03.662444  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:28:03.662456  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:28:03.662511  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:28:03.662542  423858 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:28:03.662552  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:28:03.662572  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:28:03.662593  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:28:03.662614  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:28:03.662652  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:28:03.662681  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:28:03.662696  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:28:03.662711  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:28:03.663448  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:28:03.685686  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 23:28:03.708666  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:28:03.731079  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 23:28:03.753314  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:28:03.775583  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:28:03.797712  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:28:03.822390  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:28:03.846005  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:28:03.869904  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:28:03.892602  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:28:03.915438  423858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:28:03.931537  423858 ssh_runner.go:195] Run: openssl version
	I0108 23:28:03.937000  423858 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 23:28:03.937089  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:28:03.946819  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:28:03.951665  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:28:03.951772  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:28:03.951829  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:28:03.957878  423858 command_runner.go:130] > b5213941
	I0108 23:28:03.958050  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:28:03.967789  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:28:03.977453  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:28:03.982022  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:28:03.982051  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:28:03.982108  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:28:03.987177  423858 command_runner.go:130] > 51391683
	I0108 23:28:03.987378  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0108 23:28:03.996379  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:28:04.005997  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:28:04.010535  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:28:04.010565  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:28:04.010606  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:28:04.015931  423858 command_runner.go:130] > 3ec20f2e
	I0108 23:28:04.016002  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
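The three ln -fs commands above follow the standard OpenSSL CA directory convention: each certificate installed under /usr/share/ca-certificates is made reachable through a symlink in /etc/ssl/certs named after its subject-name hash (the value printed by `openssl x509 -hash -noout`, e.g. b5213941, 51391683, 3ec20f2e), with a trailing ".0" to disambiguate collisions. A minimal sketch of the same check-and-link step, using a hypothetical certificate path:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo test -L /etc/ssl/certs/${hash}.0 || sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${hash}.0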
	I0108 23:28:04.025088  423858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:28:04.029227  423858 command_runner.go:130] > ca.crt
	I0108 23:28:04.029243  423858 command_runner.go:130] > ca.key
	I0108 23:28:04.029248  423858 command_runner.go:130] > healthcheck-client.crt
	I0108 23:28:04.029253  423858 command_runner.go:130] > healthcheck-client.key
	I0108 23:28:04.029258  423858 command_runner.go:130] > peer.crt
	I0108 23:28:04.029261  423858 command_runner.go:130] > peer.key
	I0108 23:28:04.029265  423858 command_runner.go:130] > server.crt
	I0108 23:28:04.029268  423858 command_runner.go:130] > server.key
	I0108 23:28:04.029395  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 23:28:04.035104  423858 command_runner.go:130] > Certificate will not expire
	I0108 23:28:04.035328  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 23:28:04.040910  423858 command_runner.go:130] > Certificate will not expire
	I0108 23:28:04.040968  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 23:28:04.046770  423858 command_runner.go:130] > Certificate will not expire
	I0108 23:28:04.047241  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 23:28:04.053341  423858 command_runner.go:130] > Certificate will not expire
	I0108 23:28:04.053417  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 23:28:04.059137  423858 command_runner.go:130] > Certificate will not expire
	I0108 23:28:04.059417  423858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 23:28:04.064937  423858 command_runner.go:130] > Certificate will not expire
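Each `-checkend 86400` invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; "Certificate will not expire" with exit status 0 means it outlives that window, which is why the restart path reuses the existing certificates instead of regenerating them. An equivalent standalone check (the path is illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "still valid for at least 24h" || echo "expires within 24h"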
	I0108 23:28:04.065419  423858 kubeadm.go:404] StartCluster: {Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:28:04.065534  423858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:28:04.065581  423858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:28:04.104621  423858 cri.go:89] found id: ""
	I0108 23:28:04.104718  423858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:28:04.114033  423858 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 23:28:04.114063  423858 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 23:28:04.114082  423858 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 23:28:04.114089  423858 command_runner.go:130] > member
	I0108 23:28:04.114302  423858 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 23:28:04.114326  423858 kubeadm.go:636] restartCluster start
	I0108 23:28:04.114449  423858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 23:28:04.123016  423858 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:04.123894  423858 kubeconfig.go:92] found "multinode-266395" server: "https://192.168.39.18:8443"
	I0108 23:28:04.124670  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:28:04.124992  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:28:04.125878  423858 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 23:28:04.126337  423858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 23:28:04.134507  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:04.134567  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:04.144630  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:04.635567  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:04.635676  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:04.647253  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:05.134692  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:05.134786  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:05.145544  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:05.635071  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:05.635180  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:05.647108  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:06.134668  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:06.134791  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:06.145720  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:06.635352  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:06.635480  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:06.646286  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:07.135428  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:07.135541  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:07.146275  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:07.635555  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:07.635634  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:07.646418  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:08.134987  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:08.135130  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:08.146019  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:08.635585  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:08.635688  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:08.646424  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:09.134898  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:09.135013  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:09.146008  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:09.634962  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:09.635098  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:09.646809  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:10.135420  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:10.135511  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:10.146063  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:10.634623  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:10.634733  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:10.645576  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:11.135241  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:11.135325  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:11.146156  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:11.634713  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:11.634818  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:11.646638  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:12.135240  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:12.135339  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:12.146947  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:12.634927  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:12.635015  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:12.645730  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:13.135272  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:13.135378  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:13.145981  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:13.635573  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:13.635712  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:13.646698  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:28:14.135120  423858 api_server.go:166] Checking apiserver status ...
	I0108 23:28:14.135217  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:28:14.146141  423858 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
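The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above poll for a running apiserver process: -f matches the pattern against the full command line, -x requires that match to cover the whole command line, and -n returns only the newest matching PID. Exit status 1 simply means nothing matched yet, so the check is retried roughly every half second until the deadline below is hit. The same probe can be run by hand (pattern copied from the log):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running yet"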
	I0108 23:28:14.146175  423858 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 23:28:14.146188  423858 kubeadm.go:1135] stopping kube-system containers ...
	I0108 23:28:14.146202  423858 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 23:28:14.146282  423858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:28:14.190352  423858 cri.go:89] found id: ""
	I0108 23:28:14.190434  423858 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 23:28:14.206686  423858 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:28:14.215263  423858 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 23:28:14.215301  423858 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 23:28:14.215313  423858 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 23:28:14.215323  423858 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:28:14.215395  423858 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:28:14.215454  423858 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:28:14.224142  423858 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 23:28:14.224169  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:14.338977  423858 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 23:28:14.339455  423858 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 23:28:14.339946  423858 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 23:28:14.340721  423858 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 23:28:14.341190  423858 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 23:28:14.341690  423858 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 23:28:14.342630  423858 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 23:28:14.343047  423858 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 23:28:14.343522  423858 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 23:28:14.343944  423858 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 23:28:14.344445  423858 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 23:28:14.345161  423858 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 23:28:14.346449  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:14.398433  423858 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 23:28:14.613343  423858 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 23:28:14.695961  423858 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 23:28:14.826409  423858 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 23:28:15.159061  423858 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 23:28:15.163485  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:15.231083  423858 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:28:15.232383  423858 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:28:15.232479  423858 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:28:15.345175  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:15.414679  423858 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 23:28:15.414716  423858 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 23:28:15.414728  423858 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 23:28:15.414739  423858 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 23:28:15.414772  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:15.496119  423858 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
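Rather than a full `kubeadm init`, the restart path above replays the individual init phases against the existing data directory: certs all (reusing every certificate already on disk), kubeconfig all (rewriting the admin/kubelet/controller-manager/scheduler kubeconfigs the earlier ls found missing), kubelet-start, control-plane all, and etcd local, with the addon phase following once the apiserver is healthy. Reduced to the bare commands, without the PATH override to minikube's bundled binaries that the log shows (a sketch, config path as logged):

	sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml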
	I0108 23:28:15.496174  423858 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:28:15.496259  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:15.996342  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:16.497109  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:16.996946  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:17.496958  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:17.996494  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:18.022748  423858 command_runner.go:130] > 1072
	I0108 23:28:18.023018  423858 api_server.go:72] duration metric: took 2.526838913s to wait for apiserver process to appear ...
	I0108 23:28:18.023044  423858 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:28:18.023068  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:20.882696  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 23:28:20.882730  423858 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 23:28:20.882748  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:20.943013  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 23:28:20.943047  423858 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 23:28:21.023198  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:21.030676  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 23:28:21.030706  423858 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 23:28:21.523323  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:21.528330  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 23:28:21.528357  423858 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 23:28:22.024023  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:22.030147  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 23:28:22.030177  423858 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 23:28:22.523247  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:22.533479  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
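The /healthz progression above is the usual startup sequence for a restarted apiserver: anonymous requests are rejected with 403 while the RBAC bootstrap roles that permit unauthenticated access to /healthz have not been reconciled yet, the endpoint then returns 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and finally 200 once every check passes. The same per-check breakdown can be requested explicitly (server address taken from the log; -k skips certificate verification, which is only reasonable for a local probe like this):

	curl -k "https://192.168.39.18:8443/healthz?verbose"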
	I0108 23:28:22.533606  423858 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0108 23:28:22.533617  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:22.533629  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:22.533638  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:22.550325  423858 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0108 23:28:22.550355  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:22.550366  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:22.550375  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:22.550383  423858 round_trippers.go:580]     Content-Length: 264
	I0108 23:28:22.550391  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:22 GMT
	I0108 23:28:22.550399  423858 round_trippers.go:580]     Audit-Id: e540089b-edb7-484f-aabe-2b69b068723f
	I0108 23:28:22.550408  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:22.550420  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:22.550462  423858 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 23:28:22.550599  423858 api_server.go:141] control plane version: v1.28.4
	I0108 23:28:22.550626  423858 api_server.go:131] duration metric: took 4.527573337s to wait for apiserver health ...
	I0108 23:28:22.550671  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:28:22.550683  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:28:22.552406  423858 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 23:28:22.553834  423858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:28:22.568633  423858 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:28:22.568659  423858 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 23:28:22.568669  423858 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 23:28:22.568680  423858 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:28:22.568689  423858 command_runner.go:130] > Access: 2024-01-08 23:27:50.050727036 +0000
	I0108 23:28:22.568697  423858 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 23:28:22.568709  423858 command_runner.go:130] > Change: 2024-01-08 23:27:48.185727036 +0000
	I0108 23:28:22.568718  423858 command_runner.go:130] >  Birth: -
	I0108 23:28:22.574196  423858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:28:22.574222  423858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:28:22.619694  423858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:28:23.705480  423858 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:28:23.715730  423858 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:28:23.723963  423858 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 23:28:23.737276  423858 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 23:28:23.740104  423858 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.120357032s)
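Because three nodes were found, minikube selects kindnet as the CNI and applies its manifest with the bundled kubectl. `kubectl apply` is declarative, so the pre-existing ClusterRole, ClusterRoleBinding and ServiceAccount come back as "unchanged" and only the DaemonSet is reported as "configured". The equivalent manual step, using the manifest staged by the scp above:

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml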
	I0108 23:28:23.740140  423858 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:28:23.740251  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:23.740262  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:23.740272  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:23.740284  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:23.743785  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:23.743807  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:23.743816  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:23.743824  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:23.743841  423858 round_trippers.go:580]     Audit-Id: c0c02dc4-4344-4eb5-ac4c-e70ad7dd7a77
	I0108 23:28:23.743854  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:23.743864  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:23.743873  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:23.745252  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"804"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82606 chars]
	I0108 23:28:23.749447  423858 system_pods.go:59] 12 kube-system pods found
	I0108 23:28:23.749491  423858 system_pods.go:61] "coredns-5dd5756b68-r8pvw" [5300c187-4f1f-4330-ae19-6bf2855763f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 23:28:23.749499  423858 system_pods.go:61] "etcd-multinode-266395" [ad57572e-a901-4042-b907-d0738c803dbd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 23:28:23.749506  423858 system_pods.go:61] "kindnet-brbnm" [202f1355-7d13-4a76-bf54-82139d5c527a] Running
	I0108 23:28:23.749514  423858 system_pods.go:61] "kindnet-fcjt6" [676370cd-926b-4102-b249-df808216c915] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:23.749522  423858 system_pods.go:61] "kindnet-mnltq" [c65752e0-cd30-49cf-9645-5befeecc3d34] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:23.749544  423858 system_pods.go:61] "kube-apiserver-multinode-266395" [70b0f39e-3999-4a5b-bae6-c08ae2adeb49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 23:28:23.749556  423858 system_pods.go:61] "kube-controller-manager-multinode-266395" [32b7c02b-f69c-46ac-ab67-d61a4077b5b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 23:28:23.749563  423858 system_pods.go:61] "kube-proxy-lvmgf" [9c37677d-6832-4d6b-8f29-c23d25347535] Running
	I0108 23:28:23.749567  423858 system_pods.go:61] "kube-proxy-v4q5n" [8ef0ea4c-f518-4179-9c48-4e1628a9752b] Running
	I0108 23:28:23.749574  423858 system_pods.go:61] "kube-proxy-vbq4b" [f4b0965a-b7bc-4a1a-8fc2-1397277c3710] Running
	I0108 23:28:23.749581  423858 system_pods.go:61] "kube-scheduler-multinode-266395" [df5e2822-435f-4264-854b-929b6acccd99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 23:28:23.749585  423858 system_pods.go:61] "storage-provisioner" [f15dcd0d-59b5-4f16-94c7-425f162c60ad] Running
	I0108 23:28:23.749595  423858 system_pods.go:74] duration metric: took 9.446475ms to wait for pod list to return data ...
	I0108 23:28:23.749604  423858 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:28:23.749705  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:28:23.749716  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:23.749723  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:23.749729  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:23.752437  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:23.752458  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:23.752468  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:23.752476  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:23.752484  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:23.752503  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:23.752513  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:23.752524  423858 round_trippers.go:580]     Audit-Id: 9dc4bcbc-fad2-4168-a2c0-0d00a002f4e3
	I0108 23:28:23.753355  423858 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"804"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15786 chars]
	I0108 23:28:23.754193  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:23.754246  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:23.754257  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:23.754264  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:23.754268  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:23.754278  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:23.754282  423858 node_conditions.go:105] duration metric: took 4.670742ms to run NodePressure ...
	I0108 23:28:23.754302  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:28:23.996444  423858 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 23:28:23.996476  423858 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 23:28:23.996514  423858 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 23:28:23.996619  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0108 23:28:23.996628  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:23.996635  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:23.996641  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:23.999451  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:23.999476  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:23.999487  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:23.999508  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:23.999519  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:23.999530  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:23.999542  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:23.999553  423858 round_trippers.go:580]     Audit-Id: eee6d534-0e98-40bf-ad68-3e5e006fea26
	I0108 23:28:24.000182  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"807"},"items":[{"metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"781","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0108 23:28:24.001324  423858 kubeadm.go:787] kubelet initialised
	I0108 23:28:24.001351  423858 kubeadm.go:788] duration metric: took 4.825277ms waiting for restarted kubelet to initialise ...
	I0108 23:28:24.001395  423858 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:28:24.001485  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:24.001495  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.001502  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.001510  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.004644  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:24.004659  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.004665  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.004672  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.004679  423858 round_trippers.go:580]     Audit-Id: fb130cd4-7e6c-4e4e-88e8-58b9e49bda7b
	I0108 23:28:24.004696  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.004708  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.004716  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.005831  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"807"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82606 chars]
	I0108 23:28:24.008625  423858 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.008704  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:24.008714  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.008721  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.008730  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.010514  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:24.010535  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.010546  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.010554  423858 round_trippers.go:580]     Audit-Id: a84289a8-e7b2-4376-8950-1f846111dbf8
	I0108 23:28:24.010560  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.010565  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.010570  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.010575  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.010781  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:24.011265  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.011279  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.011286  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.011293  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.012990  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:24.013007  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.013021  423858 round_trippers.go:580]     Audit-Id: b9602557-02e4-4428-b0da-09a1b6aba8e7
	I0108 23:28:24.013030  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.013040  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.013046  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.013052  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.013057  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.013203  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:24.013598  423858 pod_ready.go:97] node "multinode-266395" hosting pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.013625  423858 pod_ready.go:81] duration metric: took 4.972569ms waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:24.013638  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.013647  423858 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.013708  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:28:24.013719  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.013730  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.013739  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.016857  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:24.016878  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.016887  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.016898  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.016907  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.016915  423858 round_trippers.go:580]     Audit-Id: e989bfb6-a254-461b-ac65-c25deb42da40
	I0108 23:28:24.016923  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.016932  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.017113  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"781","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0108 23:28:24.017555  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.017572  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.017585  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.017591  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.019840  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.019855  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.019861  423858 round_trippers.go:580]     Audit-Id: da142f4a-e70c-4422-a7f8-338803d4c573
	I0108 23:28:24.019867  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.019874  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.019883  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.019892  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.019901  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.020103  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:24.020430  423858 pod_ready.go:97] node "multinode-266395" hosting pod "etcd-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.020451  423858 pod_ready.go:81] duration metric: took 6.790029ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:24.020459  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "etcd-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.020476  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.020553  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:28:24.020564  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.020574  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.020583  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.022536  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:24.022555  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.022566  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.022575  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.022585  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.022595  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.022615  423858 round_trippers.go:580]     Audit-Id: c5db56c2-7181-4fc6-9776-91ae55cd49c2
	I0108 23:28:24.022627  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.022807  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"777","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0108 23:28:24.023310  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.023334  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.023343  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.023351  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.025020  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:24.025084  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.025098  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.025107  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.025120  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.025128  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.025135  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.025140  423858 round_trippers.go:580]     Audit-Id: 42f0ed23-6574-4f0a-a45b-48251a6120cc
	I0108 23:28:24.025304  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:24.025658  423858 pod_ready.go:97] node "multinode-266395" hosting pod "kube-apiserver-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.025678  423858 pod_ready.go:81] duration metric: took 5.190448ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:24.025685  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "kube-apiserver-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.025693  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.025769  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:28:24.025778  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.025784  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.025790  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.027922  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.027941  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.027950  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.027958  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.027968  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.027979  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.027987  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:23 GMT
	I0108 23:28:24.027997  423858 round_trippers.go:580]     Audit-Id: c5257d79-1ec0-4832-8ee1-97401b2d4f7f
	I0108 23:28:24.028314  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"782","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0108 23:28:24.140838  423858 request.go:629] Waited for 112.147711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
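(The throttling notices in this log come from client-go's local rate limiter, not from server-side Priority and Fairness — exactly as the message states. A minimal sketch, assuming a kubeconfig in the default location, of how a client would raise the QPS/Burst limits behind these waits; client-go's defaults are 5 requests/second with a burst of 10.)

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Relax the client-side rate limiter that produces "Waited for ... due to
	// client-side throttling" messages (values here are illustrative).
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("clientset configured with a higher client-side rate limit")
}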
	I0108 23:28:24.140927  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.140933  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.140941  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.140948  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.143404  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.143424  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.143431  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.143437  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:24 GMT
	I0108 23:28:24.143444  423858 round_trippers.go:580]     Audit-Id: bfa08b77-b546-435c-8e8f-41c0cb352d6c
	I0108 23:28:24.143453  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.143464  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.143472  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.143694  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:24.144126  423858 pod_ready.go:97] node "multinode-266395" hosting pod "kube-controller-manager-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.144150  423858 pod_ready.go:81] duration metric: took 118.447507ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:24.144159  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "kube-controller-manager-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.144172  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.340628  423858 request.go:629] Waited for 196.378212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:28:24.340709  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:28:24.340716  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.340726  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.340747  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.343668  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.343697  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.343708  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.343717  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.343725  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:24 GMT
	I0108 23:28:24.343733  423858 round_trippers.go:580]     Audit-Id: 8d44af11-cb87-4350-b2f2-157b8d22dec4
	I0108 23:28:24.343745  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.343753  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.344185  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"796","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0108 23:28:24.541042  423858 request.go:629] Waited for 196.410857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.541109  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:24.541113  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.541128  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.541140  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.543729  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.543749  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.543756  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.543762  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:24 GMT
	I0108 23:28:24.543767  423858 round_trippers.go:580]     Audit-Id: 3845d431-b385-4735-a78b-b1e305d85c7d
	I0108 23:28:24.543772  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.543777  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.543782  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.543977  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:24.544351  423858 pod_ready.go:97] node "multinode-266395" hosting pod "kube-proxy-lvmgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.544371  423858 pod_ready.go:81] duration metric: took 400.191469ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:24.544380  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "kube-proxy-lvmgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:24.544391  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.740308  423858 request.go:629] Waited for 195.81551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:28:24.740404  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:28:24.740411  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.740423  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.740433  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.742973  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.742991  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.742998  423858 round_trippers.go:580]     Audit-Id: 898128f6-8327-46a3-92bf-6bbd572ffd60
	I0108 23:28:24.743003  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.743008  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.743013  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.743018  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.743023  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:24 GMT
	I0108 23:28:24.743234  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"487","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:28:24.941090  423858 request.go:629] Waited for 197.367341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:28:24.941177  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:28:24.941194  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:24.941208  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:24.941224  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:24.944046  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:24.944065  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:24.944072  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:24.944077  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:24.944082  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:24.944088  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:24.944093  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:24 GMT
	I0108 23:28:24.944104  423858 round_trippers.go:580]     Audit-Id: 100b69e8-f66d-497a-b242-fb175e35b83b
	I0108 23:28:24.944276  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"724","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_20_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0108 23:28:24.944661  423858 pod_ready.go:92] pod "kube-proxy-v4q5n" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:24.944681  423858 pod_ready.go:81] duration metric: took 400.280663ms waiting for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:24.944692  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:25.140718  423858 request.go:629] Waited for 195.953676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:28:25.140814  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:28:25.140823  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.140843  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.140856  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.143289  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:25.143308  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.143315  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.143321  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.143328  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.143337  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.143345  423858 round_trippers.go:580]     Audit-Id: d35c9abf-96f2-458a-9c7e-fed3dd5872d4
	I0108 23:28:25.143354  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.143515  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vbq4b","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4b0965a-b7bc-4a1a-8fc2-1397277c3710","resourceVersion":"694","creationTimestamp":"2024-01-08T23:19:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:19:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:28:25.340353  423858 request.go:629] Waited for 196.287789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:28:25.340431  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:28:25.340436  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.340446  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.340454  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.343123  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:25.343144  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.343152  423858 round_trippers.go:580]     Audit-Id: c98e4bf7-8228-4661-9446-9c2bc093eaf7
	I0108 23:28:25.343159  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.343167  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.343175  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.343184  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.343193  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.343288  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m03","uid":"9520eb58-7ccf-441c-a72a-288c0fd8fc84","resourceVersion":"807","creationTimestamp":"2024-01-08T23:20:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_20_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:20:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0108 23:28:25.343695  423858 pod_ready.go:92] pod "kube-proxy-vbq4b" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:25.343718  423858 pod_ready.go:81] duration metric: took 399.018105ms waiting for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:25.343729  423858 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:25.540681  423858 request.go:629] Waited for 196.856482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:28:25.540751  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:28:25.540756  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.540763  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.540776  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.543589  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:25.543614  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.543629  423858 round_trippers.go:580]     Audit-Id: 8c53cbd3-0208-484c-8f36-40a68f5eb969
	I0108 23:28:25.543635  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.543640  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.543645  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.543653  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.543662  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.543830  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"779","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0108 23:28:25.740553  423858 request.go:629] Waited for 196.311397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:25.740640  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:25.740649  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.740662  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.740687  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.742838  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:25.742865  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.742876  423858 round_trippers.go:580]     Audit-Id: 77cb3144-eeda-4168-971a-3d6e9addd0b1
	I0108 23:28:25.742885  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.742893  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.742902  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.742911  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.742924  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.743152  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:25.743619  423858 pod_ready.go:97] node "multinode-266395" hosting pod "kube-scheduler-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:25.743668  423858 pod_ready.go:81] duration metric: took 399.93058ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	E0108 23:28:25.743682  423858 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-266395" hosting pod "kube-scheduler-multinode-266395" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-266395" has status "Ready":"False"
	I0108 23:28:25.743698  423858 pod_ready.go:38] duration metric: took 1.7422875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
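(The whole readiness pass above finishes in under two seconds because every control-plane pod is skipped while its host node still reports Ready=False. Below is a minimal sketch of that kind of poll using client-go, with the kubeconfig path, pod name, and timeout taken from this log as assumptions; it is illustrative only, not the actual pod_ready.go implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries condition Ready=True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node carries condition Ready=True; the log above
// skips pods whose host node is not yet Ready, while this sketch simply keeps polling.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-266395", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
			if err != nil || !nodeReady(node) {
				return false, nil // host node not Ready yet
			}
			return podReady(pod), nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}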
	I0108 23:28:25.743726  423858 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:28:25.756423  423858 command_runner.go:130] > -16
	I0108 23:28:25.756810  423858 ops.go:34] apiserver oom_adj: -16
	I0108 23:28:25.756829  423858 kubeadm.go:640] restartCluster took 21.642493903s
	I0108 23:28:25.756840  423858 kubeadm.go:406] StartCluster complete in 21.691427852s
	I0108 23:28:25.756862  423858 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:28:25.756957  423858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:28:25.757755  423858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:28:25.757991  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:28:25.758122  423858 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:28:25.760856  423858 out.go:177] * Enabled addons: 
	I0108 23:28:25.758318  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:28:25.758363  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:28:25.762098  423858 addons.go:508] enable addons completed in 3.972395ms: enabled=[]
	I0108 23:28:25.762512  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:28:25.763014  423858 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:28:25.763029  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.763047  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.763066  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.765688  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:25.765702  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.765708  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.765714  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.765727  423858 round_trippers.go:580]     Content-Length: 291
	I0108 23:28:25.765735  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.765743  423858 round_trippers.go:580]     Audit-Id: f814ff74-f33a-4652-a0d3-1cfd6b9942c9
	I0108 23:28:25.765751  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.765762  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.765821  423858 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"805","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 23:28:25.765981  423858 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266395" context rescaled to 1 replicas
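(A hedged sketch of the same rescale expressed through client-go's Scale subresource helpers instead of raw GETs; the deployment name and target replica count come from the log line above, and the kubeconfig path is an assumption.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Read the current scale of the coredns Deployment via the autoscaling/v1
	// Scale subresource, the same object returned by the GET above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}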
	I0108 23:28:25.766010  423858 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:28:25.767469  423858 out.go:177] * Verifying Kubernetes components...
	I0108 23:28:25.768784  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:28:25.879663  423858 command_runner.go:130] > apiVersion: v1
	I0108 23:28:25.879686  423858 command_runner.go:130] > data:
	I0108 23:28:25.879690  423858 command_runner.go:130] >   Corefile: |
	I0108 23:28:25.879695  423858 command_runner.go:130] >     .:53 {
	I0108 23:28:25.879698  423858 command_runner.go:130] >         log
	I0108 23:28:25.879703  423858 command_runner.go:130] >         errors
	I0108 23:28:25.879707  423858 command_runner.go:130] >         health {
	I0108 23:28:25.879712  423858 command_runner.go:130] >            lameduck 5s
	I0108 23:28:25.879724  423858 command_runner.go:130] >         }
	I0108 23:28:25.879730  423858 command_runner.go:130] >         ready
	I0108 23:28:25.879735  423858 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 23:28:25.879740  423858 command_runner.go:130] >            pods insecure
	I0108 23:28:25.879746  423858 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 23:28:25.879752  423858 command_runner.go:130] >            ttl 30
	I0108 23:28:25.879757  423858 command_runner.go:130] >         }
	I0108 23:28:25.879761  423858 command_runner.go:130] >         prometheus :9153
	I0108 23:28:25.879765  423858 command_runner.go:130] >         hosts {
	I0108 23:28:25.879771  423858 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0108 23:28:25.879779  423858 command_runner.go:130] >            fallthrough
	I0108 23:28:25.879785  423858 command_runner.go:130] >         }
	I0108 23:28:25.879790  423858 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 23:28:25.879796  423858 command_runner.go:130] >            max_concurrent 1000
	I0108 23:28:25.879800  423858 command_runner.go:130] >         }
	I0108 23:28:25.879804  423858 command_runner.go:130] >         cache 30
	I0108 23:28:25.879809  423858 command_runner.go:130] >         loop
	I0108 23:28:25.879815  423858 command_runner.go:130] >         reload
	I0108 23:28:25.879819  423858 command_runner.go:130] >         loadbalance
	I0108 23:28:25.879825  423858 command_runner.go:130] >     }
	I0108 23:28:25.879829  423858 command_runner.go:130] > kind: ConfigMap
	I0108 23:28:25.879832  423858 command_runner.go:130] > metadata:
	I0108 23:28:25.879837  423858 command_runner.go:130] >   creationTimestamp: "2024-01-08T23:17:58Z"
	I0108 23:28:25.879841  423858 command_runner.go:130] >   name: coredns
	I0108 23:28:25.879845  423858 command_runner.go:130] >   namespace: kube-system
	I0108 23:28:25.879852  423858 command_runner.go:130] >   resourceVersion: "356"
	I0108 23:28:25.879857  423858 command_runner.go:130] >   uid: 46dcdfb1-d486-4d04-9672-97a7f8a58bba
	I0108 23:28:25.882222  423858 node_ready.go:35] waiting up to 6m0s for node "multinode-266395" to be "Ready" ...
	I0108 23:28:25.882417  423858 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
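(A minimal sketch, assuming the same coredns ConfigMap in kube-system, of the check behind the "already contains ... skipping" decision above; it is illustrative, not minikube's start.go code.)

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the coredns ConfigMap and look for the host.minikube.internal entry
	// in the Corefile's hosts block, as dumped earlier in this log.
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("host record already present, nothing to do")
	} else {
		fmt.Println("host record missing; it would need to be added to the hosts block")
	}
}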
	I0108 23:28:25.940519  423858 request.go:629] Waited for 58.182274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:25.940627  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:25.940638  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:25.940649  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:25.940657  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:25.943889  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:25.943914  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:25.943935  423858 round_trippers.go:580]     Audit-Id: d92f5546-a753-4754-9fd3-0d5de8c01285
	I0108 23:28:25.943953  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:25.943961  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:25.943966  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:25.943980  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:25.943988  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:25 GMT
	I0108 23:28:25.944349  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:26.383029  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:26.383058  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:26.383066  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:26.383072  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:26.385731  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:26.385755  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:26.385767  423858 round_trippers.go:580]     Audit-Id: 712cb061-52a5-4ae9-b22c-6871b656bb96
	I0108 23:28:26.385775  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:26.385783  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:26.385789  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:26.385797  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:26.385809  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:26 GMT
	I0108 23:28:26.386515  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:26.883336  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:26.883381  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:26.883390  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:26.883396  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:26.886473  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:26.886498  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:26.886509  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:26.886515  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:26.886520  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:26 GMT
	I0108 23:28:26.886526  423858 round_trippers.go:580]     Audit-Id: f37da08a-1c02-447c-8b7a-bd1de0f0959c
	I0108 23:28:26.886531  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:26.886536  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:26.887002  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:27.383271  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:27.383301  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:27.383313  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:27.383320  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:27.386529  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:27.386549  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:27.386556  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:27 GMT
	I0108 23:28:27.386567  423858 round_trippers.go:580]     Audit-Id: 99072c38-5839-4b34-b76c-9954eb861053
	I0108 23:28:27.386590  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:27.386604  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:27.386613  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:27.386621  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:27.387181  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:27.882873  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:27.882911  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:27.882923  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:27.882932  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:27.885928  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:27.885954  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:27.885961  423858 round_trippers.go:580]     Audit-Id: de6a66cf-7331-4988-993a-d6ff15c02f3b
	I0108 23:28:27.885967  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:27.885979  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:27.885987  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:27.885994  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:27.886006  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:27 GMT
	I0108 23:28:27.886290  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:27.886647  423858 node_ready.go:58] node "multinode-266395" has status "Ready":"False"
	I0108 23:28:28.382938  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:28.382962  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:28.382970  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:28.382976  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:28.387072  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:28.387101  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:28.387109  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:28.387115  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:28 GMT
	I0108 23:28:28.387120  423858 round_trippers.go:580]     Audit-Id: 8e3c3166-4bfe-42b5-b9d0-9c19cbe2f0f0
	I0108 23:28:28.387139  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:28.387147  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:28.387152  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:28.387472  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:28.883208  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:28.883235  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:28.883243  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:28.883249  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:28.886033  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:28.886067  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:28.886075  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:28.886087  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:28 GMT
	I0108 23:28:28.886093  423858 round_trippers.go:580]     Audit-Id: 33363e11-9337-42cc-8d90-99cdb2ce3321
	I0108 23:28:28.886106  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:28.886115  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:28.886127  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:28.886555  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:29.382724  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:29.382748  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:29.382758  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:29.382770  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:29.387511  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:29.387535  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:29.387545  423858 round_trippers.go:580]     Audit-Id: f4bc2513-efb6-4562-bb7d-2fb3bab33ec1
	I0108 23:28:29.387551  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:29.387559  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:29.387566  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:29.387575  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:29.387583  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:29 GMT
	I0108 23:28:29.388040  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:29.882721  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:29.882748  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:29.882768  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:29.882775  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:29.885678  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:29.885699  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:29.885710  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:29.885717  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:29.885724  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:29.885731  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:29.885740  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:29 GMT
	I0108 23:28:29.885752  423858 round_trippers.go:580]     Audit-Id: c042a138-70ad-4b6b-8944-4a8584ab88e4
	I0108 23:28:29.885979  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:30.382598  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:30.382627  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:30.382638  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:30.382645  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:30.387079  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:30.387112  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:30.387122  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:30.387130  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:30.387139  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:30.387162  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:30 GMT
	I0108 23:28:30.387176  423858 round_trippers.go:580]     Audit-Id: c46ab203-ba49-421c-a6db-30c6c055f5f3
	I0108 23:28:30.387188  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:30.387987  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:30.388329  423858 node_ready.go:58] node "multinode-266395" has status "Ready":"False"
	I0108 23:28:30.882631  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:30.882655  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:30.882663  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:30.882670  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:30.885863  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:30.885892  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:30.885902  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:30.885911  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:30.885929  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:30.885941  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:30.885954  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:30 GMT
	I0108 23:28:30.885966  423858 round_trippers.go:580]     Audit-Id: eb26e49e-e5e3-4fe5-909b-447331d038ce
	I0108 23:28:30.886439  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"725","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0108 23:28:31.383144  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:31.383169  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.383177  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.383183  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.386106  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:31.386128  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.386139  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.386146  423858 round_trippers.go:580]     Audit-Id: 72c292aa-5e79-4585-b8a7-e108cc3e38bc
	I0108 23:28:31.386154  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.386162  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.386170  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.386180  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.386801  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:31.387157  423858 node_ready.go:49] node "multinode-266395" has status "Ready":"True"
	I0108 23:28:31.387178  423858 node_ready.go:38] duration metric: took 5.504926799s waiting for node "multinode-266395" to be "Ready" ...
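	(Note: the repeated GET /api/v1/nodes/multinode-266395 requests above are a readiness poll: the node is fetched roughly every 500ms until its Ready condition reports True, which happened here after ~5.5s. A minimal sketch of such a poll with client-go follows; the node name, timeout, and poll interval are taken from the log, but the code is an assumption-laden illustration, not minikube's node_ready implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node until its Ready condition is True
	// or the timeout expires (hypothetical helper, for illustration only).
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // the log above polls at roughly this cadence
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "multinode-266395", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}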
	I0108 23:28:31.387189  423858 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
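	(Note: the pod_ready wait that starts here lists kube-system pods and then checks each system-critical pod until it reports Ready. A short sketch of the per-pod check, assuming the same client-go imports as the sketch above; the helper name is illustrative.)

	import corev1 "k8s.io/api/core/v1"

	// podIsReady mirrors the per-pod check behind the pod_ready wait: a pod
	// counts as Ready when its PodReady condition reports True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}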
	I0108 23:28:31.387252  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:31.387260  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.387271  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.387278  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.392057  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:31.392074  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.392087  423858 round_trippers.go:580]     Audit-Id: adc5d69f-a569-46c1-9bf0-2d7bac63427b
	I0108 23:28:31.392096  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.392104  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.392119  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.392129  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.392141  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.394241  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"846"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82925 chars]
	I0108 23:28:31.396835  423858 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:31.396937  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:31.396948  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.396958  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.396968  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.399445  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:31.399459  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.399465  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.399470  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.399475  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.399480  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.399487  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.399495  423858 round_trippers.go:580]     Audit-Id: fa536732-edb9-47a4-bf86-66d05b47e483
	I0108 23:28:31.399800  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:31.400339  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:31.400361  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.400372  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.400382  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.402656  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:31.402670  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.402676  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.402681  423858 round_trippers.go:580]     Audit-Id: 26bbebf7-513d-4d6e-86a0-914f8346171d
	I0108 23:28:31.402685  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.402691  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.402696  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.402709  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.402863  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:31.897503  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:31.897531  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.897539  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.897545  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.900329  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:31.900355  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.900370  423858 round_trippers.go:580]     Audit-Id: 9ad0d5af-c56c-4be7-95af-0f37f0e1b90f
	I0108 23:28:31.900377  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.900385  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.900392  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.900401  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.900411  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.900613  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:31.901127  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:31.901144  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:31.901154  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:31.901162  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:31.904443  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:31.904460  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:31.904468  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:31.904477  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:31.904485  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:31 GMT
	I0108 23:28:31.904500  423858 round_trippers.go:580]     Audit-Id: d93fd0c1-b9f6-4cb0-8a8b-c2d65935b747
	I0108 23:28:31.904509  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:31.904515  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:31.904658  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:32.397856  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:32.397881  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:32.397889  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:32.397896  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:32.400754  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:32.400777  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:32.400785  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:32.400790  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:32.400800  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:32 GMT
	I0108 23:28:32.400807  423858 round_trippers.go:580]     Audit-Id: d4cbe6bf-40fc-4c4f-aec9-92d9ac1fedfe
	I0108 23:28:32.400812  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:32.400818  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:32.401387  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:32.401856  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:32.401871  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:32.401878  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:32.401884  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:32.404103  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:32.404119  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:32.404125  423858 round_trippers.go:580]     Audit-Id: 82bc0cf3-0b99-445b-a22e-bec50ec5ba19
	I0108 23:28:32.404133  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:32.404142  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:32.404150  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:32.404159  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:32.404181  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:32 GMT
	I0108 23:28:32.404352  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:32.898014  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:32.898043  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:32.898055  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:32.898065  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:32.901736  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:32.901761  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:32.901772  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:32.901782  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:32.901791  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:32.901800  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:32.901808  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:32 GMT
	I0108 23:28:32.901817  423858 round_trippers.go:580]     Audit-Id: 3b858d6a-ab14-414b-b02f-06a657d21a94
	I0108 23:28:32.902011  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:32.902616  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:32.902636  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:32.902647  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:32.902657  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:32.905170  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:32.905194  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:32.905204  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:32.905212  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:32 GMT
	I0108 23:28:32.905221  423858 round_trippers.go:580]     Audit-Id: e9699229-3100-4d6a-971d-284ede1c3ecb
	I0108 23:28:32.905230  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:32.905241  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:32.905250  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:32.905430  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:33.397106  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:33.397141  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:33.397152  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:33.397160  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:33.400448  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:33.400472  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:33.400483  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:33.400492  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:33 GMT
	I0108 23:28:33.400502  423858 round_trippers.go:580]     Audit-Id: 74bdc828-99c8-461f-9897-2f814bd239b4
	I0108 23:28:33.400510  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:33.400519  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:33.400525  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:33.401012  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:33.401451  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:33.401466  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:33.401473  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:33.401479  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:33.404183  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:33.404201  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:33.404207  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:33.404213  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:33 GMT
	I0108 23:28:33.404218  423858 round_trippers.go:580]     Audit-Id: 928a1d4f-43ce-407a-a9cf-8153f11b0084
	I0108 23:28:33.404224  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:33.404232  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:33.404237  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:33.405819  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:33.406114  423858 pod_ready.go:102] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"False"
	I0108 23:28:33.897508  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:33.897533  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:33.897541  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:33.897548  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:33.905484  423858 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 23:28:33.905515  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:33.905523  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:33.905528  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:33.905533  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:33.905538  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:33 GMT
	I0108 23:28:33.905546  423858 round_trippers.go:580]     Audit-Id: 28c8fb60-54da-4afa-bed4-be2ca71a4434
	I0108 23:28:33.905551  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:33.905728  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:33.906173  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:33.906185  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:33.906195  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:33.906201  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:33.910504  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:33.910530  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:33.910541  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:33.910550  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:33 GMT
	I0108 23:28:33.910563  423858 round_trippers.go:580]     Audit-Id: 9622554a-fe8d-4f95-a363-30ab57cd26b3
	I0108 23:28:33.910569  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:33.910574  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:33.910579  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:33.910772  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:34.397764  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:34.397793  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:34.397805  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:34.397815  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:34.400504  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:34.400525  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:34.400533  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:34 GMT
	I0108 23:28:34.400538  423858 round_trippers.go:580]     Audit-Id: f19f93b4-3695-4ae4-8786-3c70a186b55c
	I0108 23:28:34.400543  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:34.400549  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:34.400557  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:34.400564  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:34.400952  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:34.401558  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:34.401574  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:34.401582  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:34.401587  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:34.403706  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:34.403722  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:34.403728  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:34.403734  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:34 GMT
	I0108 23:28:34.403739  423858 round_trippers.go:580]     Audit-Id: 82f767af-9744-4b15-be87-d50cb764720b
	I0108 23:28:34.403746  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:34.403754  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:34.403759  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:34.404094  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:34.897809  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:34.897840  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:34.897848  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:34.897854  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:34.900823  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:34.900849  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:34.900867  423858 round_trippers.go:580]     Audit-Id: 7afbbbc1-b5e6-4dfa-ab08-cfa8e1930cc3
	I0108 23:28:34.900876  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:34.900890  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:34.900897  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:34.900909  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:34.900916  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:34 GMT
	I0108 23:28:34.901119  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:34.901669  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:34.901687  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:34.901694  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:34.901699  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:34.904003  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:34.904023  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:34.904033  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:34.904040  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:34.904047  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:34 GMT
	I0108 23:28:34.904054  423858 round_trippers.go:580]     Audit-Id: e84f864a-fe8d-4bef-b9a0-1873f10b560f
	I0108 23:28:34.904063  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:34.904075  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:34.904360  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:35.398069  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:35.398103  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:35.398117  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:35.398125  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:35.401003  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:35.401025  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:35.401033  423858 round_trippers.go:580]     Audit-Id: b8576838-cd56-4330-ae2d-314b4c755180
	I0108 23:28:35.401038  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:35.401045  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:35.401054  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:35.401061  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:35.401070  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:35 GMT
	I0108 23:28:35.401271  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:35.401738  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:35.401755  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:35.401765  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:35.401774  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:35.403974  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:35.403992  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:35.404000  423858 round_trippers.go:580]     Audit-Id: d58ac032-8f6f-4530-a530-4bcbbe3a5aa5
	I0108 23:28:35.404008  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:35.404016  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:35.404029  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:35.404043  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:35.404065  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:35 GMT
	I0108 23:28:35.404283  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:35.898020  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:35.898047  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:35.898064  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:35.898072  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:35.900974  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:35.900999  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:35.901010  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:35.901020  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:35.901027  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:35 GMT
	I0108 23:28:35.901036  423858 round_trippers.go:580]     Audit-Id: 89cb093c-2cf3-4e53-894f-a8c93dc18e28
	I0108 23:28:35.901042  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:35.901048  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:35.901281  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:35.901787  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:35.901805  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:35.901816  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:35.901823  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:35.904147  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:35.904185  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:35.904202  423858 round_trippers.go:580]     Audit-Id: 28ec5421-18e3-4dd8-aa98-2b6d6100e81f
	I0108 23:28:35.904215  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:35.904227  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:35.904240  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:35.904252  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:35.904264  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:35 GMT
	I0108 23:28:35.904397  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:35.904802  423858 pod_ready.go:102] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"False"
	I0108 23:28:36.397118  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:36.397145  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:36.397159  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:36.397168  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:36.400262  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:36.400289  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:36.400309  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:36.400318  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:36.400323  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:36.400331  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:36.400342  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:36 GMT
	I0108 23:28:36.400354  423858 round_trippers.go:580]     Audit-Id: bd7f563e-2e51-4a39-b537-df2c681ee277
	I0108 23:28:36.400711  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:36.401221  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:36.401237  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:36.401245  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:36.401250  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:36.403664  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:36.403684  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:36.403692  423858 round_trippers.go:580]     Audit-Id: aac248f6-b406-4a37-bdb9-5526c99d349c
	I0108 23:28:36.403701  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:36.403709  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:36.403723  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:36.403735  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:36.403747  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:36 GMT
	I0108 23:28:36.403878  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:36.897405  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:36.897438  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:36.897451  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:36.897461  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:36.900780  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:36.900817  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:36.900829  423858 round_trippers.go:580]     Audit-Id: e4892678-98e2-4038-8791-9536722d3ab5
	I0108 23:28:36.900838  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:36.900847  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:36.900855  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:36.900868  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:36.900878  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:36 GMT
	I0108 23:28:36.901072  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:36.901542  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:36.901559  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:36.901569  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:36.901585  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:36.904049  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:36.904106  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:36.904117  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:36.904125  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:36 GMT
	I0108 23:28:36.904136  423858 round_trippers.go:580]     Audit-Id: bbfc4fb8-d475-450d-8576-a00d27dbe85b
	I0108 23:28:36.904145  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:36.904151  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:36.904158  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:36.904341  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:37.397614  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:37.397644  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:37.397657  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:37.397664  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:37.400904  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:37.400924  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:37.400930  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:37.400936  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:37.400941  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:37.400947  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:37.400952  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:37 GMT
	I0108 23:28:37.400957  423858 round_trippers.go:580]     Audit-Id: 5455ebc5-39f0-4181-a780-638a4cd6ed58
	I0108 23:28:37.401175  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:37.401891  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:37.401909  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:37.401921  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:37.401927  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:37.404141  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:37.404155  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:37.404162  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:37.404167  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:37.404178  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:37.404183  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:37 GMT
	I0108 23:28:37.404189  423858 round_trippers.go:580]     Audit-Id: e5e0ee0b-f722-4f36-9edd-36826db6cecf
	I0108 23:28:37.404194  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:37.404695  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:37.898082  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:37.898111  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:37.898120  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:37.898126  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:37.901254  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:37.901275  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:37.901285  423858 round_trippers.go:580]     Audit-Id: 0dfeb9fc-5ba7-47e6-b15d-9e1c7d6c5190
	I0108 23:28:37.901296  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:37.901304  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:37.901312  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:37.901321  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:37.901338  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:37 GMT
	I0108 23:28:37.901472  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:37.902001  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:37.902019  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:37.902029  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:37.902037  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:37.904896  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:37.904911  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:37.904918  423858 round_trippers.go:580]     Audit-Id: e4ac9f4b-2ddd-4b16-8caf-b87f4619122a
	I0108 23:28:37.904923  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:37.904928  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:37.904933  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:37.904952  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:37.904965  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:37 GMT
	I0108 23:28:37.905318  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:37.905620  423858 pod_ready.go:102] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"False"
	I0108 23:28:38.398062  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:38.398094  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:38.398106  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:38.398116  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:38.400700  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:38.400722  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:38.400729  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:38.400735  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:38.400743  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:38.400752  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:38 GMT
	I0108 23:28:38.400760  423858 round_trippers.go:580]     Audit-Id: 209c6e48-57eb-4635-b7b4-4819135afeef
	I0108 23:28:38.400768  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:38.400965  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"786","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0108 23:28:38.401404  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:38.401419  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:38.401426  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:38.401432  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:38.403895  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:38.403912  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:38.403918  423858 round_trippers.go:580]     Audit-Id: 342a2696-6b7a-4a47-a9cb-616a80cfd451
	I0108 23:28:38.403926  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:38.403938  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:38.403945  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:38.403953  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:38.403969  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:38 GMT
	I0108 23:28:38.404072  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:38.897368  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:38.897393  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:38.897401  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:38.897407  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:38.901401  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:38.901433  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:38.901448  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:38.901457  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:38.901464  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:38.901469  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:38 GMT
	I0108 23:28:38.901474  423858 round_trippers.go:580]     Audit-Id: 6fec7e8d-0b91-4b40-9827-9548bc6fb5b4
	I0108 23:28:38.901479  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:38.902481  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"871","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0108 23:28:38.902932  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:38.902945  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:38.902952  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:38.902957  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:38.905812  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:38.905835  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:38.905844  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:38 GMT
	I0108 23:28:38.905852  423858 round_trippers.go:580]     Audit-Id: 8eb4f1a9-df2a-42f6-9b00-eba1ad66839d
	I0108 23:28:38.905859  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:38.905866  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:38.905873  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:38.905882  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:38.906375  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.397192  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:39.397216  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.397224  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.397231  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.400418  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:39.400446  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.400455  423858 round_trippers.go:580]     Audit-Id: 70e9f156-68d5-478a-b148-4793495f90a7
	I0108 23:28:39.400463  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.400481  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.400491  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.400499  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.400510  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.400757  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"871","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I0108 23:28:39.401361  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.401382  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.401392  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.401408  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.404012  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.404039  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.404050  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.404062  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.404073  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.404084  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.404096  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.404107  423858 round_trippers.go:580]     Audit-Id: 7683459b-b48d-4266-9b55-d85a69f4f94c
	I0108 23:28:39.404292  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.898040  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:28:39.898068  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.898076  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.898088  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.901632  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:39.901663  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.901673  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.901679  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.901687  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.901695  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.901708  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.901720  423858 round_trippers.go:580]     Audit-Id: 81f0f03e-7251-48e0-ba4a-05a3a501f45e
	I0108 23:28:39.902419  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0108 23:28:39.902896  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.902910  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.902916  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.902922  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.905424  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.905451  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.905461  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.905470  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.905478  423858 round_trippers.go:580]     Audit-Id: af56e866-77ec-4204-9927-bca1a642e60b
	I0108 23:28:39.905487  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.905497  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.905508  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.905742  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.906149  423858 pod_ready.go:92] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:39.906170  423858 pod_ready.go:81] duration metric: took 8.509310477s waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.906179  423858 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.906233  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:28:39.906241  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.906247  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.906253  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.908482  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.908504  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.908514  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.908522  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.908530  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.908538  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.908551  423858 round_trippers.go:580]     Audit-Id: d5bf82ac-647f-4653-9196-55966a0579bc
	I0108 23:28:39.908559  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.908709  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"865","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0108 23:28:39.909171  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.909190  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.909198  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.909205  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.911179  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:39.911195  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.911202  423858 round_trippers.go:580]     Audit-Id: 17f2c008-e935-41b9-990b-cfdaf4bd4aa9
	I0108 23:28:39.911208  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.911213  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.911218  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.911224  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.911229  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.911407  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.911754  423858 pod_ready.go:92] pod "etcd-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:39.911780  423858 pod_ready.go:81] duration metric: took 5.592394ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.911802  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.911866  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:28:39.911877  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.911887  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.911897  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.913761  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:39.913785  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.913795  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.913802  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.913808  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.913814  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.913820  423858 round_trippers.go:580]     Audit-Id: c7999055-ec9c-4405-9286-0822a3c632de
	I0108 23:28:39.913826  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.914176  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"860","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0108 23:28:39.914572  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.914585  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.914592  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.914598  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.917065  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.917077  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.917083  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.917089  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.917094  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.917101  423858 round_trippers.go:580]     Audit-Id: 218b5947-a240-428f-b88a-b8fdcce671a8
	I0108 23:28:39.917110  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.917118  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.917536  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.917892  423858 pod_ready.go:92] pod "kube-apiserver-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:39.917909  423858 pod_ready.go:81] duration metric: took 6.09683ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.917922  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.917978  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:28:39.917988  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.917998  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.918008  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.927189  423858 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0108 23:28:39.927209  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.927219  423858 round_trippers.go:580]     Audit-Id: 6611ecae-4569-415c-9c0c-83c22552daa2
	I0108 23:28:39.927229  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.927236  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.927247  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.927255  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.927268  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.927449  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"850","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0108 23:28:39.927815  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.927830  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.927837  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.927842  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.930741  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.930769  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.930776  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.930783  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.930790  423858 round_trippers.go:580]     Audit-Id: bf2c467f-dd80-4737-92d3-de5fee2b0c4f
	I0108 23:28:39.930796  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.930804  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.930809  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.931172  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.931479  423858 pod_ready.go:92] pod "kube-controller-manager-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:39.931495  423858 pod_ready.go:81] duration metric: took 13.566442ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.931505  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.931553  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:28:39.931561  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.931567  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.931572  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.934445  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:39.934459  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.934465  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.934471  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.934480  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.934493  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.934502  423858 round_trippers.go:580]     Audit-Id: 8afca954-0c50-41ea-a588-e37ab785eaef
	I0108 23:28:39.934512  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.935150  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"796","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0108 23:28:39.935506  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:39.935518  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:39.935525  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:39.935552  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:39.937522  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:39.937544  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:39.937553  423858 round_trippers.go:580]     Audit-Id: aa641bc6-a051-4e73-98e0-3f9184b49a13
	I0108 23:28:39.937564  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:39.937580  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:39.937587  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:39.937598  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:39.937614  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:39 GMT
	I0108 23:28:39.937765  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:39.938111  423858 pod_ready.go:92] pod "kube-proxy-lvmgf" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:39.938131  423858 pod_ready.go:81] duration metric: took 6.619828ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:39.938143  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:40.098532  423858 request.go:629] Waited for 160.317191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:28:40.098600  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:28:40.098605  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:40.098612  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:40.098620  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:40.101742  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:40.101765  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:40.101775  423858 round_trippers.go:580]     Audit-Id: a0654e81-b13e-4a4e-85cb-bed9c2375542
	I0108 23:28:40.101784  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:40.101792  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:40.101801  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:40.101813  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:40.101822  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:40 GMT
	I0108 23:28:40.102601  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"487","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:28:40.298363  423858 request.go:629] Waited for 195.353207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:28:40.298440  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:28:40.298445  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:40.298452  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:40.298459  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:40.301452  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:40.301476  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:40.301485  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:40.301491  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:40 GMT
	I0108 23:28:40.301496  423858 round_trippers.go:580]     Audit-Id: 9a675c2d-800c-457c-88fe-866c82e0e2f8
	I0108 23:28:40.301501  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:40.301506  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:40.301512  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:40.301632  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"634208e7-068a-4df5-978c-942779812c38","resourceVersion":"724","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_20_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4236 chars]
	I0108 23:28:40.301974  423858 pod_ready.go:92] pod "kube-proxy-v4q5n" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:40.301995  423858 pod_ready.go:81] duration metric: took 363.838557ms waiting for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:40.302009  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:40.498225  423858 request.go:629] Waited for 196.140314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:28:40.498300  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:28:40.498305  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:40.498313  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:40.498320  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:40.501243  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:40.501268  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:40.501278  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:40.501285  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:40.501291  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:40.501296  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:40 GMT
	I0108 23:28:40.501302  423858 round_trippers.go:580]     Audit-Id: 46cd0c02-3f69-4b68-9055-4702ad08fe1d
	I0108 23:28:40.501307  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:40.501515  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vbq4b","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4b0965a-b7bc-4a1a-8fc2-1397277c3710","resourceVersion":"694","creationTimestamp":"2024-01-08T23:19:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:19:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:28:40.698488  423858 request.go:629] Waited for 196.365232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:28:40.698566  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:28:40.698573  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:40.698580  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:40.698590  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:40.701202  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:40.701228  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:40.701259  423858 round_trippers.go:580]     Audit-Id: af43c8d3-371c-4608-98dc-a074e34e80bd
	I0108 23:28:40.701271  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:40.701278  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:40.701285  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:40.701292  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:40.701299  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:40 GMT
	I0108 23:28:40.701492  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m03","uid":"9520eb58-7ccf-441c-a72a-288c0fd8fc84","resourceVersion":"807","creationTimestamp":"2024-01-08T23:20:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_20_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:20:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0108 23:28:40.701796  423858 pod_ready.go:92] pod "kube-proxy-vbq4b" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:40.701815  423858 pod_ready.go:81] duration metric: took 399.799028ms waiting for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:40.701823  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:40.898972  423858 request.go:629] Waited for 197.047693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:28:40.899067  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:28:40.899079  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:40.899089  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:40.899102  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:40.902555  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:40.902576  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:40.902583  423858 round_trippers.go:580]     Audit-Id: 7bf24310-c416-4b13-8773-0d00f35f7f9c
	I0108 23:28:40.902589  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:40.902594  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:40.902602  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:40.902607  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:40.902612  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:40 GMT
	I0108 23:28:40.902954  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"847","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0108 23:28:41.098854  423858 request.go:629] Waited for 195.388676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:41.098930  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:28:41.098936  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.098957  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.098966  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.102003  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:41.102025  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.102035  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.102044  423858 round_trippers.go:580]     Audit-Id: 3ac9a65d-0d7b-4640-835b-d9c8f4615907
	I0108 23:28:41.102050  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.102059  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.102064  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.102072  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.102464  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0108 23:28:41.102804  423858 pod_ready.go:92] pod "kube-scheduler-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:28:41.102823  423858 pod_ready.go:81] duration metric: took 400.993731ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:28:41.102835  423858 pod_ready.go:38] duration metric: took 9.71562992s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:28:41.102871  423858 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:28:41.102932  423858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:28:41.121078  423858 command_runner.go:130] > 1072
	I0108 23:28:41.121118  423858 api_server.go:72] duration metric: took 15.35507511s to wait for apiserver process to appear ...
	I0108 23:28:41.121129  423858 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:28:41.121151  423858 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:28:41.126420  423858 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0108 23:28:41.126497  423858 round_trippers.go:463] GET https://192.168.39.18:8443/version
	I0108 23:28:41.126508  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.126516  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.126523  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.127640  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:28:41.127662  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.127673  423858 round_trippers.go:580]     Audit-Id: 8e325644-fa40-425e-ba4a-d620bc706761
	I0108 23:28:41.127682  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.127699  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.127709  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.127720  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.127729  423858 round_trippers.go:580]     Content-Length: 264
	I0108 23:28:41.127735  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.127752  423858 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 23:28:41.127794  423858 api_server.go:141] control plane version: v1.28.4
	I0108 23:28:41.127809  423858 api_server.go:131] duration metric: took 6.674176ms to wait for apiserver health ...
	I0108 23:28:41.127817  423858 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:28:41.298148  423858 request.go:629] Waited for 170.26269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:41.298225  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:41.298232  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.298242  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.298252  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.303019  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:41.303043  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.303052  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.303061  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.303068  423858 round_trippers.go:580]     Audit-Id: df190764-3a37-4378-8a95-3dbe14624875
	I0108 23:28:41.303075  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.303082  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.303090  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.304835  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"886"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I0108 23:28:41.307276  423858 system_pods.go:59] 12 kube-system pods found
	I0108 23:28:41.307300  423858 system_pods.go:61] "coredns-5dd5756b68-r8pvw" [5300c187-4f1f-4330-ae19-6bf2855763f2] Running
	I0108 23:28:41.307304  423858 system_pods.go:61] "etcd-multinode-266395" [ad57572e-a901-4042-b907-d0738c803dbd] Running
	I0108 23:28:41.307311  423858 system_pods.go:61] "kindnet-brbnm" [202f1355-7d13-4a76-bf54-82139d5c527a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:41.307318  423858 system_pods.go:61] "kindnet-fcjt6" [676370cd-926b-4102-b249-df808216c915] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:41.307324  423858 system_pods.go:61] "kindnet-mnltq" [c65752e0-cd30-49cf-9645-5befeecc3d34] Running
	I0108 23:28:41.307329  423858 system_pods.go:61] "kube-apiserver-multinode-266395" [70b0f39e-3999-4a5b-bae6-c08ae2adeb49] Running
	I0108 23:28:41.307336  423858 system_pods.go:61] "kube-controller-manager-multinode-266395" [32b7c02b-f69c-46ac-ab67-d61a4077b5b2] Running
	I0108 23:28:41.307340  423858 system_pods.go:61] "kube-proxy-lvmgf" [9c37677d-6832-4d6b-8f29-c23d25347535] Running
	I0108 23:28:41.307344  423858 system_pods.go:61] "kube-proxy-v4q5n" [8ef0ea4c-f518-4179-9c48-4e1628a9752b] Running
	I0108 23:28:41.307352  423858 system_pods.go:61] "kube-proxy-vbq4b" [f4b0965a-b7bc-4a1a-8fc2-1397277c3710] Running
	I0108 23:28:41.307383  423858 system_pods.go:61] "kube-scheduler-multinode-266395" [df5e2822-435f-4264-854b-929b6acccd99] Running
	I0108 23:28:41.307390  423858 system_pods.go:61] "storage-provisioner" [f15dcd0d-59b5-4f16-94c7-425f162c60ad] Running
	I0108 23:28:41.307402  423858 system_pods.go:74] duration metric: took 179.578603ms to wait for pod list to return data ...
	I0108 23:28:41.307412  423858 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:28:41.498866  423858 request.go:629] Waited for 191.361522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:28:41.498950  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/default/serviceaccounts
	I0108 23:28:41.498957  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.498970  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.498980  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.501783  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:28:41.501802  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.501809  423858 round_trippers.go:580]     Audit-Id: 9410dd6e-73f5-456e-a5ff-01741a4abc7d
	I0108 23:28:41.501814  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.501820  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.501825  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.501830  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.501835  423858 round_trippers.go:580]     Content-Length: 261
	I0108 23:28:41.501844  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.501866  423858 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"887"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fef9e48e-c368-4659-9859-1571562fbbc8","resourceVersion":"295","creationTimestamp":"2024-01-08T23:18:10Z"}}]}
	I0108 23:28:41.502068  423858 default_sa.go:45] found service account: "default"
	I0108 23:28:41.502086  423858 default_sa.go:55] duration metric: took 194.668367ms for default service account to be created ...
	I0108 23:28:41.502095  423858 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:28:41.698559  423858 request.go:629] Waited for 196.39667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:41.698636  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:28:41.698640  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.698649  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.698656  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.703238  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:28:41.703276  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.703284  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.703289  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.703296  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.703306  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.703314  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.703324  423858 round_trippers.go:580]     Audit-Id: 53e320e5-8bcb-4eef-abc1-4bb1e25df915
	I0108 23:28:41.704979  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"887"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81846 chars]
	I0108 23:28:41.707408  423858 system_pods.go:86] 12 kube-system pods found
	I0108 23:28:41.707429  423858 system_pods.go:89] "coredns-5dd5756b68-r8pvw" [5300c187-4f1f-4330-ae19-6bf2855763f2] Running
	I0108 23:28:41.707434  423858 system_pods.go:89] "etcd-multinode-266395" [ad57572e-a901-4042-b907-d0738c803dbd] Running
	I0108 23:28:41.707441  423858 system_pods.go:89] "kindnet-brbnm" [202f1355-7d13-4a76-bf54-82139d5c527a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:41.707449  423858 system_pods.go:89] "kindnet-fcjt6" [676370cd-926b-4102-b249-df808216c915] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 23:28:41.707454  423858 system_pods.go:89] "kindnet-mnltq" [c65752e0-cd30-49cf-9645-5befeecc3d34] Running
	I0108 23:28:41.707459  423858 system_pods.go:89] "kube-apiserver-multinode-266395" [70b0f39e-3999-4a5b-bae6-c08ae2adeb49] Running
	I0108 23:28:41.707463  423858 system_pods.go:89] "kube-controller-manager-multinode-266395" [32b7c02b-f69c-46ac-ab67-d61a4077b5b2] Running
	I0108 23:28:41.707470  423858 system_pods.go:89] "kube-proxy-lvmgf" [9c37677d-6832-4d6b-8f29-c23d25347535] Running
	I0108 23:28:41.707477  423858 system_pods.go:89] "kube-proxy-v4q5n" [8ef0ea4c-f518-4179-9c48-4e1628a9752b] Running
	I0108 23:28:41.707482  423858 system_pods.go:89] "kube-proxy-vbq4b" [f4b0965a-b7bc-4a1a-8fc2-1397277c3710] Running
	I0108 23:28:41.707486  423858 system_pods.go:89] "kube-scheduler-multinode-266395" [df5e2822-435f-4264-854b-929b6acccd99] Running
	I0108 23:28:41.707490  423858 system_pods.go:89] "storage-provisioner" [f15dcd0d-59b5-4f16-94c7-425f162c60ad] Running
	I0108 23:28:41.707497  423858 system_pods.go:126] duration metric: took 205.397095ms to wait for k8s-apps to be running ...
	I0108 23:28:41.707508  423858 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:28:41.707551  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:28:41.722101  423858 system_svc.go:56] duration metric: took 14.580897ms WaitForService to wait for kubelet.
	I0108 23:28:41.722134  423858 kubeadm.go:581] duration metric: took 15.956089002s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:28:41.722156  423858 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:28:41.898575  423858 request.go:629] Waited for 176.320545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0108 23:28:41.898645  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:28:41.898649  423858 round_trippers.go:469] Request Headers:
	I0108 23:28:41.898657  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:28:41.898664  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:28:41.902204  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:28:41.902231  423858 round_trippers.go:577] Response Headers:
	I0108 23:28:41.902251  423858 round_trippers.go:580]     Audit-Id: b1fac098-1325-4f9c-b43f-fd48ae83f863
	I0108 23:28:41.902259  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:28:41.902267  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:28:41.902275  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:28:41.902283  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:28:41.902291  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:28:41 GMT
	I0108 23:28:41.902687  423858 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"887"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"846","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0108 23:28:41.903267  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:41.903289  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:41.903304  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:41.903309  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:41.903316  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:28:41.903325  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:28:41.903333  423858 node_conditions.go:105] duration metric: took 181.171796ms to run NodePressure ...
	I0108 23:28:41.903353  423858 start.go:228] waiting for startup goroutines ...
	I0108 23:28:41.903377  423858 start.go:233] waiting for cluster config update ...
	I0108 23:28:41.903388  423858 start.go:242] writing updated cluster config ...
	I0108 23:28:41.903838  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:28:41.903942  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:28:41.907298  423858 out.go:177] * Starting worker node multinode-266395-m02 in cluster multinode-266395
	I0108 23:28:41.908491  423858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:28:41.908516  423858 cache.go:56] Caching tarball of preloaded images
	I0108 23:28:41.908615  423858 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:28:41.908629  423858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:28:41.908731  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:28:41.908933  423858 start.go:365] acquiring machines lock for multinode-266395-m02: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:28:41.908999  423858 start.go:369] acquired machines lock for "multinode-266395-m02" in 45.362µs
	I0108 23:28:41.909022  423858 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:28:41.909029  423858 fix.go:54] fixHost starting: m02
	I0108 23:28:41.909385  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:28:41.909421  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:28:41.924328  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42363
	I0108 23:28:41.924856  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:28:41.925371  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:28:41.925395  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:28:41.925755  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:28:41.925923  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:28:41.926081  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetState
	I0108 23:28:41.927690  423858 fix.go:102] recreateIfNeeded on multinode-266395-m02: state=Running err=<nil>
	W0108 23:28:41.927706  423858 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:28:41.930804  423858 out.go:177] * Updating the running kvm2 "multinode-266395-m02" VM ...
	I0108 23:28:41.932152  423858 machine.go:88] provisioning docker machine ...
	I0108 23:28:41.932175  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:28:41.932409  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:28:41.932581  423858 buildroot.go:166] provisioning hostname "multinode-266395-m02"
	I0108 23:28:41.932601  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:28:41.932749  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:28:41.935475  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:41.935894  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:41.935925  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:41.936067  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:28:41.936251  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:41.936387  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:41.936527  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:28:41.936652  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:28:41.936994  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:28:41.937010  423858 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395-m02 && echo "multinode-266395-m02" | sudo tee /etc/hostname
	I0108 23:28:42.074690  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266395-m02
	
	I0108 23:28:42.074728  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:28:42.077505  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.077828  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:42.077859  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.077991  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:28:42.078185  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:42.078356  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:42.078502  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:28:42.078685  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:28:42.079075  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:28:42.079094  423858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:28:42.200566  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:28:42.200610  423858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:28:42.200630  423858 buildroot.go:174] setting up certificates
	I0108 23:28:42.200644  423858 provision.go:83] configureAuth start
	I0108 23:28:42.200665  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetMachineName
	I0108 23:28:42.200958  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:28:42.203777  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.204247  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:42.204281  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.204392  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:28:42.206630  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.206987  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:42.207009  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.207170  423858 provision.go:138] copyHostCerts
	I0108 23:28:42.207208  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:28:42.207248  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:28:42.207261  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:28:42.207342  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:28:42.207454  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:28:42.207481  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:28:42.207489  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:28:42.207534  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:28:42.207603  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:28:42.207626  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:28:42.207633  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:28:42.207669  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:28:42.207734  423858 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.multinode-266395-m02 san=[192.168.39.214 192.168.39.214 localhost 127.0.0.1 minikube multinode-266395-m02]
	I0108 23:28:42.309580  423858 provision.go:172] copyRemoteCerts
	I0108 23:28:42.309656  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:28:42.309690  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:28:42.312748  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.313166  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:42.313195  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.313403  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:28:42.313588  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:42.313744  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:28:42.313847  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:28:42.404433  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:28:42.404526  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:28:42.428643  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:28:42.428714  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 23:28:42.451162  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:28:42.451241  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
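The server certificate generated and copied above is issued for the SAN list logged at the start of configureAuth (the node IP, localhost, 127.0.0.1, minikube, and the machine name). A minimal sketch of how those SANs could be confirmed on the installed certificate, assuming SSH access to the node; the grep pattern is only illustrative:

	# Print the installed server certificate and show its Subject Alternative Names.
	ssh docker@192.168.39.214 \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text" \
	  | grep -A1 "Subject Alternative Name"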
	I0108 23:28:42.474365  423858 provision.go:86] duration metric: configureAuth took 273.699798ms
	I0108 23:28:42.474397  423858 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:28:42.474640  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:28:42.474741  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:28:42.477612  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.478034  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:28:42.478079  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:28:42.478239  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:28:42.478524  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:42.478728  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:28:42.478892  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:28:42.479060  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:28:42.479409  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:28:42.479427  423858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:30:13.001608  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:30:13.001655  423858 machine.go:91] provisioned docker machine in 1m31.06948006s
	I0108 23:30:13.001672  423858 start.go:300] post-start starting for "multinode-266395-m02" (driver="kvm2")
	I0108 23:30:13.001746  423858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:30:13.001775  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:30:13.002202  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:30:13.002235  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:30:13.005545  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.005968  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:13.006000  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.006167  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:30:13.006397  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:30:13.006600  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:30:13.006740  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:30:13.097723  423858 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:30:13.102395  423858 command_runner.go:130] > NAME=Buildroot
	I0108 23:30:13.102424  423858 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 23:30:13.102431  423858 command_runner.go:130] > ID=buildroot
	I0108 23:30:13.102440  423858 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 23:30:13.102447  423858 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 23:30:13.102502  423858 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:30:13.102522  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:30:13.102615  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:30:13.102722  423858 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:30:13.102737  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:30:13.102849  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:30:13.111189  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:30:13.135530  423858 start.go:303] post-start completed in 133.841278ms
	I0108 23:30:13.135561  423858 fix.go:56] fixHost completed within 1m31.226532547s
	I0108 23:30:13.135591  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:30:13.138571  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.139018  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:13.139120  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.139265  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:30:13.139532  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:30:13.139723  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:30:13.139857  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:30:13.140023  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:30:13.140388  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0108 23:30:13.140403  423858 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:30:13.260286  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704756613.252017086
	
	I0108 23:30:13.260307  423858 fix.go:206] guest clock: 1704756613.252017086
	I0108 23:30:13.260315  423858 fix.go:219] Guest: 2024-01-08 23:30:13.252017086 +0000 UTC Remote: 2024-01-08 23:30:13.135566647 +0000 UTC m=+453.982655635 (delta=116.450439ms)
	I0108 23:30:13.260331  423858 fix.go:190] guest clock delta is within tolerance: 116.450439ms
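The guest-clock check above runs date over SSH (the %!s(MISSING) fragments are Go printf escaping in the log; the command is date +%s.%N) and compares the result with the host's wall clock. A minimal sketch of the same comparison, assuming the node is reachable over SSH; the tolerance minikube applies is not reproduced here:

	# Sample both clocks with nanosecond precision and print the signed difference in seconds.
	guest=$(ssh docker@192.168.39.214 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest - host = $(echo "$guest - $host" | bc -l) s"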
	I0108 23:30:13.260336  423858 start.go:83] releasing machines lock for "multinode-266395-m02", held for 1m31.35132306s
	I0108 23:30:13.260356  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:30:13.260670  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:30:13.263642  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.264023  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:13.264051  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.266100  423858 out.go:177] * Found network options:
	I0108 23:30:13.267503  423858 out.go:177]   - NO_PROXY=192.168.39.18
	W0108 23:30:13.268895  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:30:13.268921  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:30:13.269455  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:30:13.269659  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:30:13.269741  423858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:30:13.269771  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	W0108 23:30:13.269851  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:30:13.269964  423858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:30:13.269993  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:30:13.272442  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.272838  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:13.272881  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.272936  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.273066  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:30:13.273240  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:30:13.273418  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:30:13.273440  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:13.273468  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:13.273621  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:30:13.273610  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:30:13.273750  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:30:13.273882  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:30:13.273986  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:30:13.394166  423858 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:30:13.520378  423858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:30:13.526346  423858 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 23:30:13.526594  423858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:30:13.526669  423858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:30:13.536218  423858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 23:30:13.536244  423858 start.go:475] detecting cgroup driver to use...
	I0108 23:30:13.536339  423858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:30:13.551146  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:30:13.565063  423858 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:30:13.565122  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:30:13.578601  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:30:13.591658  423858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:30:13.732931  423858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:30:13.880265  423858 docker.go:219] disabling docker service ...
	I0108 23:30:13.880330  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:30:13.894889  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:30:13.908792  423858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:30:14.029611  423858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:30:14.155075  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:30:14.167594  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:30:14.184938  423858 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:30:14.185280  423858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:30:14.185337  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:30:14.194871  423858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:30:14.194927  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:30:14.204197  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:30:14.213175  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:30:14.222180  423858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:30:14.231652  423858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:30:14.239574  423858 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 23:30:14.240079  423858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:30:14.248281  423858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:30:14.372256  423858 ssh_runner.go:195] Run: sudo systemctl restart crio
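The sed commands above rewrite CRI-O's drop-in config before this restart: the pause image, the cgroup driver ("cgroupfs"), and the conmon cgroup ("pod"). A minimal sketch of checking the keys that step just set, assuming SSH access to the node; the expected values come from the commands above and from the crio config dump later in this log:

	# Show the keys the provisioning step set in the CRI-O drop-in.
	ssh docker@192.168.39.214 \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	# Expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"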
	I0108 23:30:14.845068  423858 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:30:14.845150  423858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:30:14.850899  423858 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:30:14.850922  423858 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:30:14.850933  423858 command_runner.go:130] > Device: 16h/22d	Inode: 1192        Links: 1
	I0108 23:30:14.850940  423858 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:30:14.850945  423858 command_runner.go:130] > Access: 2024-01-08 23:30:14.767719051 +0000
	I0108 23:30:14.850953  423858 command_runner.go:130] > Modify: 2024-01-08 23:30:14.767719051 +0000
	I0108 23:30:14.850961  423858 command_runner.go:130] > Change: 2024-01-08 23:30:14.767719051 +0000
	I0108 23:30:14.850968  423858 command_runner.go:130] >  Birth: -
	I0108 23:30:14.851239  423858 start.go:543] Will wait 60s for crictl version
	I0108 23:30:14.851307  423858 ssh_runner.go:195] Run: which crictl
	I0108 23:30:14.855069  423858 command_runner.go:130] > /usr/bin/crictl
	I0108 23:30:14.855157  423858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:30:14.901391  423858 command_runner.go:130] > Version:  0.1.0
	I0108 23:30:14.901415  423858 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:30:14.901425  423858 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 23:30:14.901430  423858 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:30:14.901523  423858 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:30:14.901614  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:30:14.953499  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:30:14.953523  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:30:14.953530  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:30:14.953537  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:30:14.953543  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:30:14.953548  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:30:14.953555  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:30:14.953562  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:30:14.953571  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:30:14.953582  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:30:14.953590  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:30:14.953600  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:30:14.953839  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:30:15.002545  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:30:15.002595  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:30:15.002607  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:30:15.002615  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:30:15.002625  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:30:15.002633  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:30:15.002644  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:30:15.002655  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:30:15.002666  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:30:15.002681  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:30:15.002692  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:30:15.002699  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:30:15.006241  423858 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 23:30:15.007929  423858 out.go:177]   - env NO_PROXY=192.168.39.18
	I0108 23:30:15.009529  423858 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:30:15.012545  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:15.012946  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:30:15.012978  423858 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:30:15.013204  423858 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:30:15.017654  423858 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0108 23:30:15.017746  423858 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395 for IP: 192.168.39.214
	I0108 23:30:15.017770  423858 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:30:15.017934  423858 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:30:15.017981  423858 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:30:15.017998  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:30:15.018019  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:30:15.018036  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:30:15.018055  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:30:15.018122  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:30:15.018165  423858 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:30:15.018184  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:30:15.018224  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:30:15.018251  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:30:15.018287  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:30:15.018343  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:30:15.018376  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:30:15.018395  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:30:15.018414  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:30:15.018883  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:30:15.041730  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:30:15.064863  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:30:15.088271  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:30:15.111783  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:30:15.133842  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:30:15.156109  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:30:15.178564  423858 ssh_runner.go:195] Run: openssl version
	I0108 23:30:15.184112  423858 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 23:30:15.184352  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:30:15.193996  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:30:15.198552  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:30:15.198850  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:30:15.198915  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:30:15.204486  423858 command_runner.go:130] > 3ec20f2e
	I0108 23:30:15.204565  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:30:15.212632  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:30:15.223749  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:30:15.228384  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:30:15.228495  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:30:15.228557  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:30:15.234179  423858 command_runner.go:130] > b5213941
	I0108 23:30:15.234270  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:30:15.243819  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:30:15.254569  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:30:15.259471  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:30:15.259493  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:30:15.259538  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:30:15.265234  423858 command_runner.go:130] > 51391683
	I0108 23:30:15.265294  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
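Each of the three certificate blocks above follows the same OpenSSL trust-store convention: compute the certificate's subject hash, then link the PEM to /etc/ssl/certs/<hash>.0 so tools that scan the hashed directory can resolve it. A minimal sketch of that pattern for one of the certificates shown above:

	# Compute the subject hash OpenSSL uses to index CA certificates.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for this cert in the log
	# Create or refresh the hashed symlink that verification tools look up.
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"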
	I0108 23:30:15.275070  423858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:30:15.279406  423858 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:30:15.279452  423858 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:30:15.279546  423858 ssh_runner.go:195] Run: crio config
	I0108 23:30:15.336625  423858 command_runner.go:130] ! time="2024-01-08 23:30:15.328443955Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 23:30:15.336812  423858 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:30:15.342450  423858 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:30:15.342479  423858 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:30:15.342489  423858 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:30:15.342494  423858 command_runner.go:130] > #
	I0108 23:30:15.342507  423858 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:30:15.342517  423858 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:30:15.342527  423858 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:30:15.342538  423858 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:30:15.342544  423858 command_runner.go:130] > # reload'.
	I0108 23:30:15.342553  423858 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:30:15.342567  423858 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:30:15.342578  423858 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:30:15.342592  423858 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:30:15.342598  423858 command_runner.go:130] > [crio]
	I0108 23:30:15.342610  423858 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:30:15.342621  423858 command_runner.go:130] > # containers images, in this directory.
	I0108 23:30:15.342629  423858 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 23:30:15.342649  423858 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:30:15.342660  423858 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 23:30:15.342670  423858 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:30:15.342684  423858 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:30:15.342694  423858 command_runner.go:130] > storage_driver = "overlay"
	I0108 23:30:15.342704  423858 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:30:15.342716  423858 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:30:15.342727  423858 command_runner.go:130] > storage_option = [
	I0108 23:30:15.342738  423858 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 23:30:15.342746  423858 command_runner.go:130] > ]
	I0108 23:30:15.342756  423858 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:30:15.342769  423858 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:30:15.342777  423858 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:30:15.342789  423858 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:30:15.342801  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:30:15.342810  423858 command_runner.go:130] > # always happen on a node reboot
	I0108 23:30:15.342821  423858 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:30:15.342833  423858 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:30:15.342846  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:30:15.342891  423858 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:30:15.342903  423858 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:30:15.342919  423858 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:30:15.342934  423858 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:30:15.342944  423858 command_runner.go:130] > # internal_wipe = true
	I0108 23:30:15.342953  423858 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:30:15.342965  423858 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:30:15.342977  423858 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:30:15.342986  423858 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:30:15.342999  423858 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:30:15.343007  423858 command_runner.go:130] > [crio.api]
	I0108 23:30:15.343015  423858 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:30:15.343026  423858 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:30:15.343037  423858 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:30:15.343047  423858 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:30:15.343064  423858 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:30:15.343074  423858 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:30:15.343081  423858 command_runner.go:130] > # stream_port = "0"
	I0108 23:30:15.343089  423858 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:30:15.343095  423858 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:30:15.343102  423858 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:30:15.343109  423858 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:30:15.343117  423858 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:30:15.343125  423858 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:30:15.343129  423858 command_runner.go:130] > # minutes.
	I0108 23:30:15.343134  423858 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:30:15.343143  423858 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:30:15.343150  423858 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:30:15.343156  423858 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:30:15.343162  423858 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:30:15.343170  423858 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:30:15.343176  423858 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:30:15.343182  423858 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:30:15.343190  423858 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:30:15.343196  423858 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 23:30:15.343203  423858 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:30:15.343210  423858 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 23:30:15.343225  423858 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:30:15.343233  423858 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:30:15.343237  423858 command_runner.go:130] > [crio.runtime]
	I0108 23:30:15.343246  423858 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:30:15.343251  423858 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:30:15.343258  423858 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:30:15.343264  423858 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:30:15.343270  423858 command_runner.go:130] > # default_ulimits = [
	I0108 23:30:15.343273  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343281  423858 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:30:15.343286  423858 command_runner.go:130] > # no_pivot = false
	I0108 23:30:15.343294  423858 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:30:15.343300  423858 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:30:15.343309  423858 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:30:15.343318  423858 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:30:15.343323  423858 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:30:15.343331  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:30:15.343336  423858 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 23:30:15.343344  423858 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:30:15.343350  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:30:15.343372  423858 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:30:15.343384  423858 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:30:15.343396  423858 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:30:15.343405  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:30:15.343412  423858 command_runner.go:130] > conmon_env = [
	I0108 23:30:15.343417  423858 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 23:30:15.343423  423858 command_runner.go:130] > ]
	I0108 23:30:15.343440  423858 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:30:15.343452  423858 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:30:15.343461  423858 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:30:15.343470  423858 command_runner.go:130] > # default_env = [
	I0108 23:30:15.343476  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343488  423858 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:30:15.343498  423858 command_runner.go:130] > # selinux = false
	I0108 23:30:15.343510  423858 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:30:15.343523  423858 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:30:15.343535  423858 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:30:15.343544  423858 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:30:15.343553  423858 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:30:15.343566  423858 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:30:15.343575  423858 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:30:15.343582  423858 command_runner.go:130] > # which might increase security.
	I0108 23:30:15.343587  423858 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 23:30:15.343596  423858 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:30:15.343603  423858 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:30:15.343611  423858 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:30:15.343618  423858 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:30:15.343625  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:30:15.343630  423858 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:30:15.343639  423858 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:30:15.343646  423858 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:30:15.343651  423858 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:30:15.343658  423858 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:30:15.343664  423858 command_runner.go:130] > # irqbalance daemon.
	I0108 23:30:15.343670  423858 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:30:15.343678  423858 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:30:15.343683  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:30:15.343690  423858 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:30:15.343695  423858 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:30:15.343702  423858 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:30:15.343708  423858 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:30:15.343714  423858 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:30:15.343720  423858 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:30:15.343728  423858 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:30:15.343734  423858 command_runner.go:130] > # will be added.
	I0108 23:30:15.343739  423858 command_runner.go:130] > # default_capabilities = [
	I0108 23:30:15.343744  423858 command_runner.go:130] > # 	"CHOWN",
	I0108 23:30:15.343749  423858 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:30:15.343756  423858 command_runner.go:130] > # 	"FSETID",
	I0108 23:30:15.343759  423858 command_runner.go:130] > # 	"FOWNER",
	I0108 23:30:15.343767  423858 command_runner.go:130] > # 	"SETGID",
	I0108 23:30:15.343771  423858 command_runner.go:130] > # 	"SETUID",
	I0108 23:30:15.343778  423858 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:30:15.343782  423858 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:30:15.343787  423858 command_runner.go:130] > # 	"KILL",
	I0108 23:30:15.343791  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343800  423858 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:30:15.343806  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:30:15.343813  423858 command_runner.go:130] > # default_sysctls = [
	I0108 23:30:15.343817  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343822  423858 command_runner.go:130] > # List of devices on the host that a
	I0108 23:30:15.343828  423858 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:30:15.343835  423858 command_runner.go:130] > # allowed_devices = [
	I0108 23:30:15.343839  423858 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:30:15.343842  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343853  423858 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:30:15.343861  423858 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:30:15.343869  423858 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:30:15.343897  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:30:15.343904  423858 command_runner.go:130] > # additional_devices = [
	I0108 23:30:15.343907  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343915  423858 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:30:15.343920  423858 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:30:15.343926  423858 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:30:15.343930  423858 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:30:15.343935  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343941  423858 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:30:15.343947  423858 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:30:15.343954  423858 command_runner.go:130] > # Defaults to false.
	I0108 23:30:15.343959  423858 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:30:15.343967  423858 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:30:15.343973  423858 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:30:15.343979  423858 command_runner.go:130] > # hooks_dir = [
	I0108 23:30:15.343984  423858 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:30:15.343988  423858 command_runner.go:130] > # ]
	I0108 23:30:15.343994  423858 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:30:15.344003  423858 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:30:15.344010  423858 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:30:15.344016  423858 command_runner.go:130] > #
	I0108 23:30:15.344022  423858 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:30:15.344031  423858 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:30:15.344037  423858 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:30:15.344042  423858 command_runner.go:130] > #
	I0108 23:30:15.344048  423858 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:30:15.344059  423858 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:30:15.344068  423858 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:30:15.344073  423858 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:30:15.344079  423858 command_runner.go:130] > #
	I0108 23:30:15.344083  423858 command_runner.go:130] > # default_mounts_file = ""
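The default_mounts_file referenced above is a plain list of /SRC:/DST pairs, one mount per line. As a rough, hedged sketch only (this is not CRI-O's actual parser; the file path and comment handling are assumptions), reading such a file in Go could look like:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Hypothetical override file; default_mounts_file is unset in the config above.
	f, err := os.Open("/etc/containers/mounts.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // assumption: blank lines and comments are skipped
		}
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue // not a /SRC:/DST pair
		}
		fmt.Printf("mount host %s into container at %s\n", parts[0], parts[1])
	}
}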
	I0108 23:30:15.344089  423858 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:30:15.344097  423858 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:30:15.344102  423858 command_runner.go:130] > pids_limit = 1024
	I0108 23:30:15.344110  423858 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:30:15.344116  423858 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:30:15.344124  423858 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:30:15.344132  423858 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:30:15.344139  423858 command_runner.go:130] > # log_size_max = -1
	I0108 23:30:15.344147  423858 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:30:15.344154  423858 command_runner.go:130] > # log_to_journald = false
	I0108 23:30:15.344160  423858 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:30:15.344167  423858 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:30:15.344172  423858 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:30:15.344180  423858 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:30:15.344186  423858 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:30:15.344192  423858 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:30:15.344198  423858 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:30:15.344202  423858 command_runner.go:130] > # read_only = false
	I0108 23:30:15.344208  423858 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:30:15.344214  423858 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:30:15.344220  423858 command_runner.go:130] > # live configuration reload.
	I0108 23:30:15.344224  423858 command_runner.go:130] > # log_level = "info"
	I0108 23:30:15.344229  423858 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:30:15.344234  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:30:15.344238  423858 command_runner.go:130] > # log_filter = ""
	I0108 23:30:15.344244  423858 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:30:15.344250  423858 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:30:15.344254  423858 command_runner.go:130] > # separated by comma.
	I0108 23:30:15.344258  423858 command_runner.go:130] > # uid_mappings = ""
	I0108 23:30:15.344264  423858 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:30:15.344270  423858 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:30:15.344274  423858 command_runner.go:130] > # separated by comma.
	I0108 23:30:15.344278  423858 command_runner.go:130] > # gid_mappings = ""
	I0108 23:30:15.344284  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:30:15.344293  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:30:15.344298  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:30:15.344303  423858 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:30:15.344309  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:30:15.344320  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:30:15.344328  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:30:15.344333  423858 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:30:15.344341  423858 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:30:15.344347  423858 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:30:15.344355  423858 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:30:15.344359  423858 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:30:15.344367  423858 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:30:15.344373  423858 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:30:15.344377  423858 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:30:15.344385  423858 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:30:15.344390  423858 command_runner.go:130] > drop_infra_ctr = false
	I0108 23:30:15.344399  423858 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:30:15.344405  423858 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:30:15.344414  423858 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:30:15.344418  423858 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:30:15.344424  423858 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:30:15.344429  423858 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:30:15.344440  423858 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:30:15.344454  423858 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:30:15.344465  423858 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 23:30:15.344477  423858 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:30:15.344490  423858 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:30:15.344502  423858 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:30:15.344511  423858 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:30:15.344520  423858 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:30:15.344533  423858 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 23:30:15.344546  423858 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:30:15.344554  423858 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:30:15.344562  423858 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:30:15.344570  423858 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:30:15.344574  423858 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:30:15.344580  423858 command_runner.go:130] > # ]
	I0108 23:30:15.344586  423858 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:30:15.344593  423858 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:30:15.344601  423858 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:30:15.344611  423858 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:30:15.344617  423858 command_runner.go:130] > #
	I0108 23:30:15.344622  423858 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:30:15.344629  423858 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:30:15.344634  423858 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:30:15.344639  423858 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:30:15.344646  423858 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:30:15.344653  423858 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:30:15.344657  423858 command_runner.go:130] > # Where:
	I0108 23:30:15.344664  423858 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:30:15.344671  423858 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:30:15.344679  423858 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:30:15.344685  423858 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:30:15.344691  423858 command_runner.go:130] > #   in $PATH.
	I0108 23:30:15.344697  423858 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:30:15.344704  423858 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:30:15.344710  423858 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:30:15.344716  423858 command_runner.go:130] > #   state.
	I0108 23:30:15.344724  423858 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:30:15.344732  423858 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 23:30:15.344738  423858 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:30:15.344746  423858 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:30:15.344752  423858 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:30:15.344761  423858 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:30:15.344765  423858 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:30:15.344774  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:30:15.344781  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:30:15.344789  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:30:15.344795  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:30:15.344804  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:30:15.344812  423858 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:30:15.344820  423858 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:30:15.344827  423858 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:30:15.344834  423858 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:30:15.344838  423858 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:30:15.344844  423858 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 23:30:15.344853  423858 command_runner.go:130] > runtime_type = "oci"
	I0108 23:30:15.344860  423858 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:30:15.344864  423858 command_runner.go:130] > runtime_config_path = ""
	I0108 23:30:15.344871  423858 command_runner.go:130] > monitor_path = ""
	I0108 23:30:15.344875  423858 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:30:15.344879  423858 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:30:15.344884  423858 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:30:15.344888  423858 command_runner.go:130] > # running containers
	I0108 23:30:15.344894  423858 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:30:15.344900  423858 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:30:15.344929  423858 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:30:15.344938  423858 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:30:15.344943  423858 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:30:15.344950  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:30:15.344955  423858 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:30:15.344961  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:30:15.344966  423858 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:30:15.344973  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:30:15.344981  423858 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:30:15.344989  423858 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:30:15.344995  423858 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:30:15.345002  423858 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:30:15.345011  423858 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:30:15.345017  423858 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:30:15.345028  423858 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:30:15.345038  423858 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:30:15.345046  423858 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:30:15.345053  423858 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:30:15.345063  423858 command_runner.go:130] > # Example:
	I0108 23:30:15.345069  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:30:15.345076  423858 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:30:15.345081  423858 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:30:15.345088  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:30:15.345092  423858 command_runner.go:130] > # cpuset = 0
	I0108 23:30:15.345096  423858 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:30:15.345102  423858 command_runner.go:130] > # Where:
	I0108 23:30:15.345110  423858 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:30:15.345119  423858 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:30:15.345125  423858 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:30:15.345134  423858 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:30:15.345142  423858 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:30:15.345150  423858 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:30:15.345154  423858 command_runner.go:130] > # 
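As a hedged illustration of the workload annotations described above: a pod that opts into the example "workload-type" workload carries the activation annotation "io.crio/workload" plus, optionally, a per-container override using the "io.crio.workload-type" prefix from the example. The client-go wiring, pod name, image, and kubeconfig path below are assumptions, not minikube or CRI-O code:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "workload-demo",
			Annotations: map[string]string{
				// activation annotation from the example: key only, value ignored
				"io.crio/workload": "",
				// per-container override, following the example annotation format above
				"io.crio.workload-type/app": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}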
	I0108 23:30:15.345160  423858 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:30:15.345166  423858 command_runner.go:130] > #
	I0108 23:30:15.345172  423858 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:30:15.345180  423858 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:30:15.345187  423858 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:30:15.345195  423858 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:30:15.345201  423858 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:30:15.345207  423858 command_runner.go:130] > [crio.image]
	I0108 23:30:15.345213  423858 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:30:15.345219  423858 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:30:15.345226  423858 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:30:15.345236  423858 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:30:15.345241  423858 command_runner.go:130] > # global_auth_file = ""
	I0108 23:30:15.345246  423858 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:30:15.345253  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:30:15.345259  423858 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:30:15.345267  423858 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:30:15.345273  423858 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:30:15.345280  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:30:15.345285  423858 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:30:15.345293  423858 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:30:15.345299  423858 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0108 23:30:15.345307  423858 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0108 23:30:15.345313  423858 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:30:15.345319  423858 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:30:15.345325  423858 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:30:15.345334  423858 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:30:15.345340  423858 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:30:15.345348  423858 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:30:15.345356  423858 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:30:15.345362  423858 command_runner.go:130] > # signature_policy = ""
	I0108 23:30:15.345368  423858 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:30:15.345374  423858 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:30:15.345379  423858 command_runner.go:130] > # changing them here.
	I0108 23:30:15.345383  423858 command_runner.go:130] > # insecure_registries = [
	I0108 23:30:15.345387  423858 command_runner.go:130] > # ]
	I0108 23:30:15.345395  423858 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:30:15.345403  423858 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:30:15.345407  423858 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:30:15.345413  423858 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:30:15.345417  423858 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:30:15.345423  423858 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:30:15.345430  423858 command_runner.go:130] > # CNI plugins.
	I0108 23:30:15.345436  423858 command_runner.go:130] > [crio.network]
	I0108 23:30:15.345447  423858 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:30:15.345459  423858 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:30:15.345470  423858 command_runner.go:130] > # cni_default_network = ""
	I0108 23:30:15.345479  423858 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:30:15.345489  423858 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:30:15.345500  423858 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:30:15.345510  423858 command_runner.go:130] > # plugin_dirs = [
	I0108 23:30:15.345517  423858 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:30:15.345525  423858 command_runner.go:130] > # ]
	I0108 23:30:15.345535  423858 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:30:15.345544  423858 command_runner.go:130] > [crio.metrics]
	I0108 23:30:15.345554  423858 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:30:15.345564  423858 command_runner.go:130] > enable_metrics = true
	I0108 23:30:15.345571  423858 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:30:15.345579  423858 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:30:15.345586  423858 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:30:15.345594  423858 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:30:15.345600  423858 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:30:15.345607  423858 command_runner.go:130] > # metrics_collectors = [
	I0108 23:30:15.345611  423858 command_runner.go:130] > # 	"operations",
	I0108 23:30:15.345618  423858 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:30:15.345624  423858 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:30:15.345630  423858 command_runner.go:130] > # 	"operations_errors",
	I0108 23:30:15.345635  423858 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:30:15.345642  423858 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:30:15.345646  423858 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:30:15.345652  423858 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:30:15.345657  423858 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:30:15.345661  423858 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:30:15.345665  423858 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:30:15.345671  423858 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:30:15.345676  423858 command_runner.go:130] > # 	"containers_oom",
	I0108 23:30:15.345682  423858 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:30:15.345686  423858 command_runner.go:130] > # 	"operations_total",
	I0108 23:30:15.345692  423858 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:30:15.345697  423858 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:30:15.345704  423858 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:30:15.345708  423858 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:30:15.345714  423858 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:30:15.345719  423858 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:30:15.345725  423858 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:30:15.345737  423858 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:30:15.345744  423858 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:30:15.345748  423858 command_runner.go:130] > # ]
	I0108 23:30:15.345753  423858 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:30:15.345758  423858 command_runner.go:130] > # metrics_port = 9090
	I0108 23:30:15.345763  423858 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:30:15.345769  423858 command_runner.go:130] > # metrics_socket = ""
	I0108 23:30:15.345775  423858 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:30:15.345783  423858 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:30:15.345789  423858 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:30:15.345796  423858 command_runner.go:130] > # certificate on any modification event.
	I0108 23:30:15.345800  423858 command_runner.go:130] > # metrics_cert = ""
	I0108 23:30:15.345805  423858 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:30:15.345810  423858 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:30:15.345814  423858 command_runner.go:130] > # metrics_key = ""
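Because enable_metrics is true above, CRI-O serves Prometheus metrics; metrics_port is commented out, so the default of 9090 applies. A minimal sketch for scraping that endpoint from the node itself, assuming the default port and a plaintext (non-TLS) listener:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumes the default metrics_port (9090) and no metrics_cert/metrics_key configured.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // e.g. crio_operations, image_pulls_* counters listed above
}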
	I0108 23:30:15.345819  423858 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:30:15.345823  423858 command_runner.go:130] > [crio.tracing]
	I0108 23:30:15.345829  423858 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:30:15.345833  423858 command_runner.go:130] > # enable_tracing = false
	I0108 23:30:15.345838  423858 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 23:30:15.345842  423858 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:30:15.345847  423858 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:30:15.345852  423858 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:30:15.345858  423858 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:30:15.345861  423858 command_runner.go:130] > [crio.stats]
	I0108 23:30:15.345867  423858 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:30:15.345872  423858 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:30:15.345876  423858 command_runner.go:130] > # stats_collection_period = 0
	I0108 23:30:15.345951  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:30:15.345957  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:30:15.345968  423858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:30:15.345986  423858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266395 NodeName:multinode-266395-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:30:15.346119  423858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266395-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
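	The kubeadm config above is rendered from the kubeadm options struct logged before it. A minimal, hedged sketch of how the InitConfiguration section could be templated with Go's text/template; the struct and field names here are illustrative and not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeParams mirrors only the values visible in the log; it is not a minikube type.
	type nodeParams struct {
		Name string
		IP   string
		Port int
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.IP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    node-ip: {{.IP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		// Values taken from the generated config above.
		_ = t.Execute(os.Stdout, nodeParams{Name: "multinode-266395-m02", IP: "192.168.39.214", Port: 8443})
	}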
	
	I0108 23:30:15.346173  423858 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-266395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:30:15.346222  423858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:30:15.357921  423858 command_runner.go:130] > kubeadm
	I0108 23:30:15.357941  423858 command_runner.go:130] > kubectl
	I0108 23:30:15.357948  423858 command_runner.go:130] > kubelet
	I0108 23:30:15.357968  423858 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:30:15.358032  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 23:30:15.367716  423858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 23:30:15.383102  423858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:30:15.400932  423858 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0108 23:30:15.405120  423858 command_runner.go:130] > 192.168.39.18	control-plane.minikube.internal
	I0108 23:30:15.405295  423858 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:30:15.405666  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:30:15.405763  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:30:15.405810  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:30:15.421384  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0108 23:30:15.421916  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:30:15.422413  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:30:15.422435  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:30:15.422787  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:30:15.422975  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:30:15.423105  423858 start.go:304] JoinCluster: &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:30:15.423246  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 23:30:15.423270  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:30:15.426010  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:30:15.426405  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:30:15.426452  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:30:15.426720  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:30:15.426925  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:30:15.427065  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:30:15.427193  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:30:15.616931  423858 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n1sxdq.t1uerxyu67uvzy3d --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
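The join command above is produced by running kubeadm token create --print-join-command on the control-plane host (minikube does this over SSH via its ssh_runner). A rough local sketch of capturing that output with os/exec; the PATH value and use of sudo mirror the logged command but are assumptions outside this environment:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0`)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println(err, string(out))
		return
	}
	// The trimmed output is the full "kubeadm join ..." command shown in the log.
	fmt.Println(strings.TrimSpace(string(out)))
}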
	I0108 23:30:15.624616  423858 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:30:15.624687  423858 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:30:15.625133  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:30:15.625195  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:30:15.644931  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0108 23:30:15.645464  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:30:15.646092  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:30:15.646127  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:30:15.646520  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:30:15.646765  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:30:15.647013  423858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-266395-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 23:30:15.647036  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:30:15.650303  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:30:15.650902  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:30:15.650928  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:30:15.651199  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:30:15.651424  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:30:15.651582  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:30:15.651775  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:30:15.823306  423858 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 23:30:15.877542  423858 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-fcjt6, kube-system/kube-proxy-v4q5n
	I0108 23:30:18.903735  423858 command_runner.go:130] > node/multinode-266395-m02 cordoned
	I0108 23:30:18.903786  423858 command_runner.go:130] > pod "busybox-5bc68d56bd-wz22p" has DeletionTimestamp older than 1 seconds, skipping
	I0108 23:30:18.903798  423858 command_runner.go:130] > node/multinode-266395-m02 drained
	I0108 23:30:18.903885  423858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-266395-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.256829557s)
	I0108 23:30:18.903960  423858 node.go:108] successfully drained node "m02"
	I0108 23:30:18.904439  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:30:18.904756  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:30:18.905301  423858 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 23:30:18.905367  423858 round_trippers.go:463] DELETE https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:18.905374  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:18.905387  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:18.905424  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:18.905438  423858 round_trippers.go:473]     Content-Type: application/json
	I0108 23:30:18.920770  423858 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0108 23:30:18.920799  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:18.920809  423858 round_trippers.go:580]     Content-Length: 171
	I0108 23:30:18.920818  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:18 GMT
	I0108 23:30:18.920826  423858 round_trippers.go:580]     Audit-Id: 50580f9c-7946-4898-bf99-71533cc8ae10
	I0108 23:30:18.920834  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:18.920842  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:18.920850  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:18.920858  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:18.921032  423858 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-266395-m02","kind":"nodes","uid":"634208e7-068a-4df5-978c-942779812c38"}}
	I0108 23:30:18.921080  423858 node.go:124] successfully deleted node "m02"
	I0108 23:30:18.921091  423858 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
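The removal above is a plain DELETE against /api/v1/nodes/<name> on the API server. A hedged client-go equivalent of that request (the kubeconfig path and error handling are illustrative, not minikube's implementation):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path on the control-plane host.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the logged DELETE https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-266395-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}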
	I0108 23:30:18.921112  423858 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:30:18.921131  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1sxdq.t1uerxyu67uvzy3d --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266395-m02"
	I0108 23:30:18.980885  423858 command_runner.go:130] ! W0108 23:30:18.972574    2665 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 23:30:18.981163  423858 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 23:30:19.137832  423858 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 23:30:19.137877  423858 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 23:30:19.869364  423858 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:30:19.869401  423858 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 23:30:19.869415  423858 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 23:30:19.869427  423858 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:30:19.869439  423858 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:30:19.869447  423858 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:30:19.869474  423858 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 23:30:19.869492  423858 command_runner.go:130] > This node has joined the cluster:
	I0108 23:30:19.869504  423858 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 23:30:19.869513  423858 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 23:30:19.869523  423858 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 23:30:19.869562  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 23:30:20.124815  423858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-266395 minikube.k8s.io/updated_at=2024_01_08T23_30_20_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:30:20.225526  423858 command_runner.go:130] > node/multinode-266395-m02 labeled
	I0108 23:30:20.242184  423858 command_runner.go:130] > node/multinode-266395-m03 labeled
	I0108 23:30:20.245062  423858 start.go:306] JoinCluster complete in 4.821953086s
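The join itself is the kubeadm invocation shown a few lines up, and the preflight warning notes that a CRI socket without a URL scheme is deprecated. A hedged sketch of issuing the same command with the unix:// scheme spelled out (token and CA hash are placeholders, and the local exec approach is illustrative rather than minikube's ssh_runner):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the kubeadm join from the log, but gives the CRI socket a
	// unix:// scheme as the preflight warning suggests. Token and hash are
	// placeholders, not real credentials.
	cmd := exec.Command("sudo", "kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<redacted-token>",
		"--discovery-token-ca-cert-hash", "sha256:<redacted-hash>",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "multinode-266395-m02")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm join failed: %v\n%s", err, out)
	}
	log.Printf("kubeadm join output:\n%s", out)
}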
	I0108 23:30:20.245086  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:30:20.245092  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:30:20.245148  423858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:30:20.256342  423858 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:30:20.256375  423858 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 23:30:20.256384  423858 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 23:30:20.256394  423858 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:30:20.256407  423858 command_runner.go:130] > Access: 2024-01-08 23:27:50.050727036 +0000
	I0108 23:30:20.256420  423858 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 23:30:20.256432  423858 command_runner.go:130] > Change: 2024-01-08 23:27:48.185727036 +0000
	I0108 23:30:20.256441  423858 command_runner.go:130] >  Birth: -
	I0108 23:30:20.256498  423858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:30:20.256513  423858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:30:20.278510  423858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:30:20.645565  423858 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:30:20.649323  423858 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:30:20.651768  423858 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 23:30:20.663844  423858 command_runner.go:130] > daemonset.apps/kindnet configured
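After the kindnet manifest is applied, one way to confirm the CNI rollout (an illustrative sketch; the clientset is assumed to be built as in the earlier sketch) is to compare the DaemonSet's ready count against its desired count:

package clusterchecks

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kindnetRolledOut reports whether the kindnet DaemonSet has a ready pod on
// every node it is scheduled to, which is what "daemonset.apps/kindnet
// configured" should eventually converge to.
func kindnetRolledOut(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	fmt.Printf("kindnet: %d/%d ready\n", ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
	return ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
}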
	I0108 23:30:20.667175  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:30:20.667529  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:30:20.668014  423858 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:30:20.668030  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.668041  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.668051  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.671479  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:20.671497  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.671507  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.671514  423858 round_trippers.go:580]     Audit-Id: a5ea778c-47cf-41a1-85b8-a93becac66c8
	I0108 23:30:20.671522  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.671530  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.671542  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.671550  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.671558  423858 round_trippers.go:580]     Content-Length: 291
	I0108 23:30:20.671760  423858 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"884","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 23:30:20.671884  423858 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266395" context rescaled to 1 replicas
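The rescale above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale call in the log). A minimal sketch of the same read-modify-write, assuming the clientset from the first sketch:

package clusterchecks

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the coredns Deployment to the given replica count via
// the scale subresource, mirroring the "rescaled to 1 replicas" step above.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}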
	I0108 23:30:20.671925  423858 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 23:30:20.674561  423858 out.go:177] * Verifying Kubernetes components...
	I0108 23:30:20.675927  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:30:20.690649  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:30:20.690977  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:30:20.691290  423858 node_ready.go:35] waiting up to 6m0s for node "multinode-266395-m02" to be "Ready" ...
	I0108 23:30:20.691413  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:20.691426  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.691438  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.691448  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.694117  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:20.694142  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.694152  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.694160  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.694168  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.694177  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.694185  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.694195  423858 round_trippers.go:580]     Audit-Id: 0ef1a821-abe5-4526-a8bd-d908cf76ef90
	I0108 23:30:20.694341  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"ac068e1a-04e7-4b19-9f0f-13e0f582f5a0","resourceVersion":"1028","creationTimestamp":"2024-01-08T23:30:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_30_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:30:20.694633  423858 node_ready.go:49] node "multinode-266395-m02" has status "Ready":"True"
	I0108 23:30:20.694651  423858 node_ready.go:38] duration metric: took 3.341387ms waiting for node "multinode-266395-m02" to be "Ready" ...
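The node_ready wait boils down to polling the node's Ready condition. A simplified sketch of that check (illustrative, not the exact node_ready.go logic; clientset assumed as before):

package clusterchecks

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady returns true when the named node reports Ready=True, the same
// condition the "waiting for node ... to be Ready" step inspects.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}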
	I0108 23:30:20.694664  423858 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:30:20.694738  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:30:20.694748  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.694759  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.694769  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.702306  423858 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 23:30:20.702327  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.702337  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.702346  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.702355  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.702363  423858 round_trippers.go:580]     Audit-Id: 62223933-389f-44c6-8b85-0ce6c623c357
	I0108 23:30:20.702379  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.702387  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.705113  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1035"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82206 chars]
	I0108 23:30:20.707675  423858 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.707772  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:30:20.707782  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.707794  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.707806  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.711652  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:20.711667  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.711674  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.711679  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.711684  423858 round_trippers.go:580]     Audit-Id: 3ae55a76-cb56-4228-a1a9-ab66b8040257
	I0108 23:30:20.711689  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.711694  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.711699  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.711994  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0108 23:30:20.712384  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:20.712398  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.712409  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.712417  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.715177  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:20.715191  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.715197  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.715202  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.715207  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.715212  423858 round_trippers.go:580]     Audit-Id: c63f9763-436a-4c81-9d42-69eed3c92c27
	I0108 23:30:20.715217  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.715223  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.715867  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:20.716175  423858 pod_ready.go:92] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:20.716191  423858 pod_ready.go:81] duration metric: took 8.494462ms waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
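The per-pod waits for the system-critical pods follow the same pattern against the pod's Ready condition. A sketch of that check for a single kube-system pod (illustrative; clientset assumed from the first sketch):

package clusterchecks

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether a kube-system pod has condition Ready=True,
// which is what each pod_ready wait above reduces to.
func podIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}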
	I0108 23:30:20.716202  423858 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.716271  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:30:20.716279  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.716286  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.716295  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.719602  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:20.719627  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.719637  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.719646  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.719653  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.719662  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.719671  423858 round_trippers.go:580]     Audit-Id: c203adbc-5c2d-4030-a46d-a51de18c993e
	I0108 23:30:20.719681  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.719877  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"865","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0108 23:30:20.720330  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:20.720350  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.720360  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.720369  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.724258  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:20.724277  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.724287  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.724294  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.724302  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.724310  423858 round_trippers.go:580]     Audit-Id: 8a7a5c01-d4ed-4422-982d-d9bb8612e177
	I0108 23:30:20.724321  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.724331  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.724592  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:20.724911  423858 pod_ready.go:92] pod "etcd-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:20.724928  423858 pod_ready.go:81] duration metric: took 8.716064ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.724945  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.724996  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:30:20.725000  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.725007  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.725016  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.726939  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:30:20.726955  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.726963  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.726971  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.726978  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.726986  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.726994  423858 round_trippers.go:580]     Audit-Id: c3b80387-196b-406e-a08b-fc54fea78534
	I0108 23:30:20.727003  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.727289  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"860","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0108 23:30:20.727706  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:20.727723  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.727730  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.727736  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.729724  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:30:20.729739  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.729748  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.729756  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.729763  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.729772  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.729781  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.729794  423858 round_trippers.go:580]     Audit-Id: 9f7a32b1-a367-4fef-a6dd-15249a39b2ce
	I0108 23:30:20.729963  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:20.730324  423858 pod_ready.go:92] pod "kube-apiserver-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:20.730343  423858 pod_ready.go:81] duration metric: took 5.38683ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.730355  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.730415  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:30:20.730426  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.730436  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.730447  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.732379  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:30:20.732395  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.732404  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.732412  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.732420  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.732428  423858 round_trippers.go:580]     Audit-Id: 49e10561-7f5c-4006-9c2e-9bdbd1b3b2ad
	I0108 23:30:20.732436  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.732443  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.732595  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"850","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0108 23:30:20.733031  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:20.733048  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.733059  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.733069  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.734938  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:30:20.734954  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.734963  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.734970  423858 round_trippers.go:580]     Audit-Id: a3fccad6-55d4-4091-9885-88a1ef842fc5
	I0108 23:30:20.734978  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.734985  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.734993  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.735005  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.735245  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:20.735620  423858 pod_ready.go:92] pod "kube-controller-manager-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:20.735643  423858 pod_ready.go:81] duration metric: took 5.275481ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.735655  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:20.892159  423858 request.go:629] Waited for 156.365326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:30:20.892261  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:30:20.892276  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:20.892293  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:20.892302  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:20.895580  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:20.895600  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:20.895607  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:20.895612  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:20 GMT
	I0108 23:30:20.895618  423858 round_trippers.go:580]     Audit-Id: 95fc3cd5-7aa7-40db-8f43-4c36c3d21278
	I0108 23:30:20.895626  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:20.895635  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:20.895644  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:20.896007  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"796","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0108 23:30:21.091940  423858 request.go:629] Waited for 195.401205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:21.092033  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:21.092045  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:21.092061  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:21.092078  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:21.094646  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:21.094673  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:21.094682  423858 round_trippers.go:580]     Audit-Id: 15467052-1246-4945-bc7e-58d40dd04dd9
	I0108 23:30:21.094690  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:21.094697  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:21.094704  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:21.094712  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:21.094723  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:21 GMT
	I0108 23:30:21.094964  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:21.095424  423858 pod_ready.go:92] pod "kube-proxy-lvmgf" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:21.095446  423858 pod_ready.go:81] duration metric: took 359.7788ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
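The "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter, not API priority and fairness. If those delays mattered, one option (a tuning assumption, not something this test changes) would be to raise QPS and Burst on the rest.Config before building the clientset:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	// client-go defaults to QPS=5, Burst=10; the throttling messages above are
	// the client pacing itself against those limits.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = cs // use the clientset as usual
}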
	I0108 23:30:21.095473  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:21.291476  423858 request.go:629] Waited for 195.890968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:30:21.291561  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:30:21.291572  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:21.291583  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:21.291596  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:21.294764  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:21.294791  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:21.294802  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:21.294810  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:21.294841  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:21.294852  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:21 GMT
	I0108 23:30:21.294859  423858 round_trippers.go:580]     Audit-Id: 3e63bf97-ddf0-4fa3-849f-20f9e86676fc
	I0108 23:30:21.294865  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:21.295111  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"1033","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0108 23:30:21.492084  423858 request.go:629] Waited for 196.399755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:21.492166  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:21.492171  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:21.492179  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:21.492188  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:21.494691  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:21.494712  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:21.494719  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:21 GMT
	I0108 23:30:21.494725  423858 round_trippers.go:580]     Audit-Id: 578b026e-463c-4140-a619-f0e87ef8b689
	I0108 23:30:21.494730  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:21.494735  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:21.494741  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:21.494750  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:21.494902  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"ac068e1a-04e7-4b19-9f0f-13e0f582f5a0","resourceVersion":"1028","creationTimestamp":"2024-01-08T23:30:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_30_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:30:21.692500  423858 request.go:629] Waited for 96.345119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:30:21.692596  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:30:21.692604  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:21.692615  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:21.692625  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:21.698672  423858 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 23:30:21.698696  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:21.698703  423858 round_trippers.go:580]     Audit-Id: 88dfa7eb-d127-410e-bfc4-a46ceaebd613
	I0108 23:30:21.698709  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:21.698715  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:21.698720  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:21.698725  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:21.698730  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:21 GMT
	I0108 23:30:21.698947  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"1045","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 23:30:21.891781  423858 request.go:629] Waited for 192.396239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:21.891853  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:30:21.891858  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:21.891865  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:21.891871  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:21.895474  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:21.895496  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:21.895507  423858 round_trippers.go:580]     Audit-Id: a439a671-2d24-42bd-bec7-229b7b115a58
	I0108 23:30:21.895513  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:21.895519  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:21.895525  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:21.895530  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:21.895536  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:21 GMT
	I0108 23:30:21.895773  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"ac068e1a-04e7-4b19-9f0f-13e0f582f5a0","resourceVersion":"1028","creationTimestamp":"2024-01-08T23:30:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_30_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:30:21.896053  423858 pod_ready.go:92] pod "kube-proxy-v4q5n" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:21.896068  423858 pod_ready.go:81] duration metric: took 800.584666ms waiting for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:21.896077  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:22.091582  423858 request.go:629] Waited for 195.404293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:30:22.091654  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:30:22.091662  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:22.091673  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:22.091683  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:22.094864  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:22.094889  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:22.094899  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:22.094911  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:22.094919  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:22.094926  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:22 GMT
	I0108 23:30:22.094933  423858 round_trippers.go:580]     Audit-Id: fd0d815e-1fa7-4182-9762-c71eb10f8613
	I0108 23:30:22.094946  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:22.095354  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vbq4b","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4b0965a-b7bc-4a1a-8fc2-1397277c3710","resourceVersion":"694","creationTimestamp":"2024-01-08T23:19:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:19:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 23:30:22.292247  423858 request.go:629] Waited for 196.348391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:30:22.292332  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:30:22.292339  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:22.292350  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:22.292361  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:22.297912  423858 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 23:30:22.297944  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:22.297956  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:22.297965  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:22.297971  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:22.297976  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:22 GMT
	I0108 23:30:22.297982  423858 round_trippers.go:580]     Audit-Id: d3adcb31-f18d-498d-baf8-5588b31d6a8e
	I0108 23:30:22.297987  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:22.298212  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m03","uid":"9520eb58-7ccf-441c-a72a-288c0fd8fc84","resourceVersion":"1029","creationTimestamp":"2024-01-08T23:20:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_30_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:20:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0108 23:30:22.298517  423858 pod_ready.go:92] pod "kube-proxy-vbq4b" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:22.298537  423858 pod_ready.go:81] duration metric: took 402.453676ms waiting for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:22.298551  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:22.491996  423858 request.go:629] Waited for 193.367246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:30:22.492087  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:30:22.492095  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:22.492105  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:22.492115  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:22.495097  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:22.495120  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:22.495127  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:22.495132  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:22.495137  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:22.495142  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:22 GMT
	I0108 23:30:22.495148  423858 round_trippers.go:580]     Audit-Id: 90c1e37c-a4ea-468a-8366-80c9bb16583f
	I0108 23:30:22.495152  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:22.495604  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"847","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0108 23:30:22.692366  423858 request.go:629] Waited for 196.366999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:22.692467  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:30:22.692474  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:22.692482  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:22.692497  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:22.696118  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:30:22.696143  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:22.696155  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:22.696163  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:22 GMT
	I0108 23:30:22.696171  423858 round_trippers.go:580]     Audit-Id: 6cb1837d-3db8-4c9c-91ec-ea940e9f2c61
	I0108 23:30:22.696178  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:22.696184  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:22.696191  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:22.696509  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:30:22.696840  423858 pod_ready.go:92] pod "kube-scheduler-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:30:22.696874  423858 pod_ready.go:81] duration metric: took 398.298016ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:30:22.696900  423858 pod_ready.go:38] duration metric: took 2.002218422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:30:22.696925  423858 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:30:22.696990  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:30:22.709618  423858 system_svc.go:56] duration metric: took 12.685469ms WaitForService to wait for kubelet.
	I0108 23:30:22.709654  423858 kubeadm.go:581] duration metric: took 2.037696983s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:30:22.709681  423858 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:30:22.892155  423858 request.go:629] Waited for 182.375893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0108 23:30:22.892217  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:30:22.892222  423858 round_trippers.go:469] Request Headers:
	I0108 23:30:22.892230  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:30:22.892253  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:30:22.895233  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:30:22.895261  423858 round_trippers.go:577] Response Headers:
	I0108 23:30:22.895272  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:30:22 GMT
	I0108 23:30:22.895281  423858 round_trippers.go:580]     Audit-Id: b367d409-71c2-4cb9-b6aa-b465f54c225c
	I0108 23:30:22.895289  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:30:22.895299  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:30:22.895314  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:30:22.895321  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:30:22.896037  423858 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1051"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16210 chars]
	I0108 23:30:22.896697  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:30:22.896719  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:30:22.896735  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:30:22.896739  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:30:22.896745  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:30:22.896749  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:30:22.896754  423858 node_conditions.go:105] duration metric: took 187.068844ms to run NodePressure ...
	I0108 23:30:22.896765  423858 start.go:228] waiting for startup goroutines ...
	I0108 23:30:22.896789  423858 start.go:242] writing updated cluster config ...
	I0108 23:30:22.897210  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:30:22.897289  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:30:22.900122  423858 out.go:177] * Starting worker node multinode-266395-m03 in cluster multinode-266395
	I0108 23:30:22.901961  423858 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 23:30:22.901986  423858 cache.go:56] Caching tarball of preloaded images
	I0108 23:30:22.902086  423858 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 23:30:22.902097  423858 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 23:30:22.902191  423858 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/config.json ...
	I0108 23:30:22.902352  423858 start.go:365] acquiring machines lock for multinode-266395-m03: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:30:22.902393  423858 start.go:369] acquired machines lock for "multinode-266395-m03" in 23.998µs
	I0108 23:30:22.902407  423858 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:30:22.902412  423858 fix.go:54] fixHost starting: m03
	I0108 23:30:22.902684  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:30:22.902719  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:30:22.917633  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43049
	I0108 23:30:22.918065  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:30:22.918531  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:30:22.918551  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:30:22.918874  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:30:22.919099  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:30:22.919270  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetState
	I0108 23:30:22.921035  423858 fix.go:102] recreateIfNeeded on multinode-266395-m03: state=Running err=<nil>
	W0108 23:30:22.921053  423858 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:30:22.923002  423858 out.go:177] * Updating the running kvm2 "multinode-266395-m03" VM ...
	I0108 23:30:22.924239  423858 machine.go:88] provisioning docker machine ...
	I0108 23:30:22.924261  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:30:22.924494  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetMachineName
	I0108 23:30:22.924682  423858 buildroot.go:166] provisioning hostname "multinode-266395-m03"
	I0108 23:30:22.924700  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetMachineName
	I0108 23:30:22.924850  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:30:22.927542  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:22.927979  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:22.928011  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:22.928198  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:30:22.928367  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:22.928492  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:22.928605  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:30:22.928725  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:30:22.929053  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 23:30:22.929072  423858 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-266395-m03 && echo "multinode-266395-m03" | sudo tee /etc/hostname
	I0108 23:30:23.073880  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-266395-m03
	
	I0108 23:30:23.073906  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:30:23.077157  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.077529  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:23.077563  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.077721  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:30:23.077918  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:23.078059  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:23.078161  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:30:23.078293  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:30:23.078645  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 23:30:23.078666  423858 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-266395-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-266395-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-266395-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:30:23.208552  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:30:23.208588  423858 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:30:23.208615  423858 buildroot.go:174] setting up certificates
	I0108 23:30:23.208628  423858 provision.go:83] configureAuth start
	I0108 23:30:23.208644  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetMachineName
	I0108 23:30:23.208932  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetIP
	I0108 23:30:23.211613  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.211989  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:23.212019  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.212196  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:30:23.214381  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.214818  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:23.214849  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.215024  423858 provision.go:138] copyHostCerts
	I0108 23:30:23.215060  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:30:23.215092  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:30:23.215101  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:30:23.215167  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:30:23.215243  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:30:23.215260  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:30:23.215267  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:30:23.215289  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:30:23.215340  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:30:23.215399  423858 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:30:23.215408  423858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:30:23.215442  423858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:30:23.215550  423858 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.multinode-266395-m03 san=[192.168.39.239 192.168.39.239 localhost 127.0.0.1 minikube multinode-266395-m03]
	I0108 23:30:23.498842  423858 provision.go:172] copyRemoteCerts
	I0108 23:30:23.498907  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:30:23.498932  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:30:23.501372  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.501794  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:23.501821  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.502028  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:30:23.502271  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:23.502447  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:30:23.502598  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m03/id_rsa Username:docker}
	I0108 23:30:23.602552  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 23:30:23.602634  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:30:23.625413  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 23:30:23.625498  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 23:30:23.649480  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 23:30:23.649568  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:30:23.673261  423858 provision.go:86] duration metric: configureAuth took 464.613448ms
	I0108 23:30:23.673293  423858 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:30:23.673566  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:30:23.673652  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:30:23.676281  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.676607  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:30:23.676639  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:30:23.676787  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:30:23.676986  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:23.677138  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:30:23.677249  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:30:23.677421  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:30:23.677813  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 23:30:23.677837  423858 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:31:54.384915  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:31:54.384963  423858 machine.go:91] provisioned docker machine in 1m31.460704974s
	I0108 23:31:54.384980  423858 start.go:300] post-start starting for "multinode-266395-m03" (driver="kvm2")
	I0108 23:31:54.385026  423858 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:31:54.385058  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:31:54.385566  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:31:54.385609  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:31:54.388664  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.389115  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:54.389141  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.389318  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:31:54.389579  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:31:54.389766  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:31:54.389946  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m03/id_rsa Username:docker}
	I0108 23:31:54.485344  423858 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:31:54.490091  423858 command_runner.go:130] > NAME=Buildroot
	I0108 23:31:54.490113  423858 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 23:31:54.490117  423858 command_runner.go:130] > ID=buildroot
	I0108 23:31:54.490123  423858 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 23:31:54.490127  423858 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 23:31:54.490240  423858 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:31:54.490291  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:31:54.490382  423858 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:31:54.490514  423858 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:31:54.490527  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /etc/ssl/certs/4070942.pem
	I0108 23:31:54.490633  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:31:54.499063  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:31:54.522132  423858 start.go:303] post-start completed in 137.105677ms
	I0108 23:31:54.522158  423858 fix.go:56] fixHost completed within 1m31.619745744s
	I0108 23:31:54.522180  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:31:54.524957  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.525324  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:54.525353  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.525558  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:31:54.525778  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:31:54.525966  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:31:54.526092  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:31:54.526214  423858 main.go:141] libmachine: Using SSH client type: native
	I0108 23:31:54.526578  423858 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0108 23:31:54.526591  423858 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:31:54.656491  423858 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704756714.648313578
	
	I0108 23:31:54.656520  423858 fix.go:206] guest clock: 1704756714.648313578
	I0108 23:31:54.656529  423858 fix.go:219] Guest: 2024-01-08 23:31:54.648313578 +0000 UTC Remote: 2024-01-08 23:31:54.522162935 +0000 UTC m=+555.369251931 (delta=126.150643ms)
	I0108 23:31:54.656547  423858 fix.go:190] guest clock delta is within tolerance: 126.150643ms
	I0108 23:31:54.656552  423858 start.go:83] releasing machines lock for "multinode-266395-m03", held for 1m31.754149678s
	I0108 23:31:54.656572  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:31:54.656866  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetIP
	I0108 23:31:54.659428  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.659934  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:54.659968  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.662168  423858 out.go:177] * Found network options:
	I0108 23:31:54.663887  423858 out.go:177]   - NO_PROXY=192.168.39.18,192.168.39.214
	W0108 23:31:54.665365  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 23:31:54.665387  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:31:54.665403  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:31:54.666046  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:31:54.666258  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .DriverName
	I0108 23:31:54.666390  423858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:31:54.666436  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	W0108 23:31:54.666449  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 23:31:54.666475  423858 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 23:31:54.666550  423858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:31:54.666575  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHHostname
	I0108 23:31:54.669187  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.669463  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.669578  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:54.669616  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.669781  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:31:54.669872  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:54.669916  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:54.669946  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:31:54.670150  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:31:54.670167  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHPort
	I0108 23:31:54.670340  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m03/id_rsa Username:docker}
	I0108 23:31:54.670348  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHKeyPath
	I0108 23:31:54.670499  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetSSHUsername
	I0108 23:31:54.670622  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m03/id_rsa Username:docker}
	I0108 23:31:54.790643  423858 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 23:31:54.915236  423858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:31:54.921475  423858 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 23:31:54.921540  423858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:31:54.921618  423858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:31:54.931238  423858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 23:31:54.931262  423858 start.go:475] detecting cgroup driver to use...
	I0108 23:31:54.931328  423858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:31:54.947054  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:31:54.962628  423858 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:31:54.962686  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:31:54.979002  423858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:31:54.993253  423858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:31:55.150399  423858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:31:55.285787  423858 docker.go:219] disabling docker service ...
	I0108 23:31:55.285862  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:31:55.300765  423858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:31:55.312641  423858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:31:55.435773  423858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:31:55.629868  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:31:55.648404  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:31:55.666607  423858 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 23:31:55.667055  423858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 23:31:55.667113  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:31:55.677593  423858 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:31:55.677669  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:31:55.692456  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:31:55.702563  423858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:31:55.712492  423858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:31:55.722905  423858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:31:55.731942  423858 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 23:31:55.732107  423858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:31:55.741034  423858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:31:55.882211  423858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 23:31:58.618439  423858 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.736187662s)
	I0108 23:31:58.618475  423858 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:31:58.618527  423858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:31:58.624250  423858 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 23:31:58.624291  423858 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 23:31:58.624303  423858 command_runner.go:130] > Device: 16h/22d	Inode: 1249        Links: 1
	I0108 23:31:58.624310  423858 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:31:58.624315  423858 command_runner.go:130] > Access: 2024-01-08 23:31:58.513317784 +0000
	I0108 23:31:58.624321  423858 command_runner.go:130] > Modify: 2024-01-08 23:31:58.513317784 +0000
	I0108 23:31:58.624326  423858 command_runner.go:130] > Change: 2024-01-08 23:31:58.513317784 +0000
	I0108 23:31:58.624334  423858 command_runner.go:130] >  Birth: -
	I0108 23:31:58.624363  423858 start.go:543] Will wait 60s for crictl version
	I0108 23:31:58.624412  423858 ssh_runner.go:195] Run: which crictl
	I0108 23:31:58.629093  423858 command_runner.go:130] > /usr/bin/crictl
	I0108 23:31:58.629168  423858 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:31:58.675354  423858 command_runner.go:130] > Version:  0.1.0
	I0108 23:31:58.675397  423858 command_runner.go:130] > RuntimeName:  cri-o
	I0108 23:31:58.675404  423858 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 23:31:58.675412  423858 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 23:31:58.675493  423858 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:31:58.675576  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:31:58.726167  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:31:58.726194  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:31:58.726201  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:31:58.726206  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:31:58.726213  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:31:58.726218  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:31:58.726222  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:31:58.726243  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:31:58.726252  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:31:58.726263  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:31:58.726276  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:31:58.726286  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:31:58.726375  423858 ssh_runner.go:195] Run: crio --version
	I0108 23:31:58.770143  423858 command_runner.go:130] > crio version 1.24.1
	I0108 23:31:58.770178  423858 command_runner.go:130] > Version:          1.24.1
	I0108 23:31:58.770187  423858 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 23:31:58.770192  423858 command_runner.go:130] > GitTreeState:     dirty
	I0108 23:31:58.770198  423858 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 23:31:58.770203  423858 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 23:31:58.770207  423858 command_runner.go:130] > Compiler:         gc
	I0108 23:31:58.770211  423858 command_runner.go:130] > Platform:         linux/amd64
	I0108 23:31:58.770217  423858 command_runner.go:130] > Linkmode:         dynamic
	I0108 23:31:58.770224  423858 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 23:31:58.770228  423858 command_runner.go:130] > SeccompEnabled:   true
	I0108 23:31:58.770232  423858 command_runner.go:130] > AppArmorEnabled:  false
	I0108 23:31:58.772463  423858 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 23:31:58.774263  423858 out.go:177]   - env NO_PROXY=192.168.39.18
	I0108 23:31:58.775883  423858 out.go:177]   - env NO_PROXY=192.168.39.18,192.168.39.214
	I0108 23:31:58.777336  423858 main.go:141] libmachine: (multinode-266395-m03) Calling .GetIP
	I0108 23:31:58.779984  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:58.780342  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:6a:73", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:20:20 +0000 UTC Type:0 Mac:52:54:00:db:6a:73 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-266395-m03 Clientid:01:52:54:00:db:6a:73}
	I0108 23:31:58.780380  423858 main.go:141] libmachine: (multinode-266395-m03) DBG | domain multinode-266395-m03 has defined IP address 192.168.39.239 and MAC address 52:54:00:db:6a:73 in network mk-multinode-266395
	I0108 23:31:58.780650  423858 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:31:58.785308  423858 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0108 23:31:58.785400  423858 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395 for IP: 192.168.39.239
	I0108 23:31:58.785439  423858 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:31:58.785595  423858 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:31:58.785646  423858 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:31:58.785658  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 23:31:58.785675  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 23:31:58.785690  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 23:31:58.785720  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 23:31:58.785796  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:31:58.785839  423858 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:31:58.785854  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:31:58.785883  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:31:58.785906  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:31:58.785972  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:31:58.786008  423858 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:31:58.786040  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> /usr/share/ca-certificates/4070942.pem
	I0108 23:31:58.786053  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:31:58.786065  423858 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem -> /usr/share/ca-certificates/407094.pem
	I0108 23:31:58.786612  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:31:58.810480  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:31:58.832483  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:31:58.854572  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:31:58.878744  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:31:58.901933  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:31:58.924444  423858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:31:58.947884  423858 ssh_runner.go:195] Run: openssl version
	I0108 23:31:58.954081  423858 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 23:31:58.954150  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:31:58.965655  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:31:58.970536  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:31:58.970698  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:31:58.970753  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:31:58.976466  423858 command_runner.go:130] > 3ec20f2e
	I0108 23:31:58.976536  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:31:58.988155  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:31:59.000396  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:31:59.005116  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:31:59.005346  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:31:59.005396  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:31:59.010999  423858 command_runner.go:130] > b5213941
	I0108 23:31:59.011068  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:31:59.020153  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:31:59.030784  423858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:31:59.035452  423858 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:31:59.035483  423858 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:31:59.035531  423858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:31:59.040916  423858 command_runner.go:130] > 51391683
	I0108 23:31:59.041180  423858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0108 23:31:59.049729  423858 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:31:59.054000  423858 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:31:59.054047  423858 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 23:31:59.054140  423858 ssh_runner.go:195] Run: crio config
	I0108 23:31:59.106666  423858 command_runner.go:130] ! time="2024-01-08 23:31:59.098532768Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 23:31:59.106700  423858 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 23:31:59.118247  423858 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 23:31:59.118274  423858 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 23:31:59.118281  423858 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 23:31:59.118284  423858 command_runner.go:130] > #
	I0108 23:31:59.118292  423858 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 23:31:59.118299  423858 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 23:31:59.118304  423858 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 23:31:59.118311  423858 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 23:31:59.118315  423858 command_runner.go:130] > # reload'.
	I0108 23:31:59.118320  423858 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 23:31:59.118327  423858 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 23:31:59.118340  423858 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 23:31:59.118352  423858 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 23:31:59.118361  423858 command_runner.go:130] > [crio]
	I0108 23:31:59.118370  423858 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 23:31:59.118380  423858 command_runner.go:130] > # containers images, in this directory.
	I0108 23:31:59.118389  423858 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 23:31:59.118402  423858 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 23:31:59.118411  423858 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 23:31:59.118419  423858 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 23:31:59.118428  423858 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 23:31:59.118439  423858 command_runner.go:130] > storage_driver = "overlay"
	I0108 23:31:59.118451  423858 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 23:31:59.118466  423858 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 23:31:59.118477  423858 command_runner.go:130] > storage_option = [
	I0108 23:31:59.118489  423858 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 23:31:59.118495  423858 command_runner.go:130] > ]
	I0108 23:31:59.118502  423858 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 23:31:59.118510  423858 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 23:31:59.118517  423858 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 23:31:59.118528  423858 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 23:31:59.118537  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 23:31:59.118547  423858 command_runner.go:130] > # always happen on a node reboot
	I0108 23:31:59.118553  423858 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 23:31:59.118565  423858 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 23:31:59.118585  423858 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 23:31:59.118600  423858 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 23:31:59.118611  423858 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 23:31:59.118625  423858 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 23:31:59.118641  423858 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 23:31:59.118650  423858 command_runner.go:130] > # internal_wipe = true
	I0108 23:31:59.118659  423858 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 23:31:59.118675  423858 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 23:31:59.118686  423858 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 23:31:59.118697  423858 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 23:31:59.118708  423858 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 23:31:59.118717  423858 command_runner.go:130] > [crio.api]
	I0108 23:31:59.118725  423858 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 23:31:59.118736  423858 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 23:31:59.118749  423858 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 23:31:59.118757  423858 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 23:31:59.118767  423858 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 23:31:59.118776  423858 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 23:31:59.118783  423858 command_runner.go:130] > # stream_port = "0"
	I0108 23:31:59.118796  423858 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 23:31:59.118803  423858 command_runner.go:130] > # stream_enable_tls = false
	I0108 23:31:59.118815  423858 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 23:31:59.118824  423858 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 23:31:59.118836  423858 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 23:31:59.118849  423858 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 23:31:59.118858  423858 command_runner.go:130] > # minutes.
	I0108 23:31:59.118868  423858 command_runner.go:130] > # stream_tls_cert = ""
	I0108 23:31:59.118890  423858 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 23:31:59.118899  423858 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 23:31:59.118907  423858 command_runner.go:130] > # stream_tls_key = ""
	I0108 23:31:59.118915  423858 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 23:31:59.118924  423858 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 23:31:59.118930  423858 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 23:31:59.118936  423858 command_runner.go:130] > # stream_tls_ca = ""
	I0108 23:31:59.118944  423858 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:31:59.118952  423858 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 23:31:59.118959  423858 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 23:31:59.118966  423858 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 23:31:59.118981  423858 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 23:31:59.118989  423858 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 23:31:59.118993  423858 command_runner.go:130] > [crio.runtime]
	I0108 23:31:59.119001  423858 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 23:31:59.119009  423858 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 23:31:59.119013  423858 command_runner.go:130] > # "nofile=1024:2048"
	I0108 23:31:59.119021  423858 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 23:31:59.119027  423858 command_runner.go:130] > # default_ulimits = [
	I0108 23:31:59.119031  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119040  423858 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 23:31:59.119046  423858 command_runner.go:130] > # no_pivot = false
	I0108 23:31:59.119052  423858 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 23:31:59.119060  423858 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 23:31:59.119065  423858 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 23:31:59.119071  423858 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 23:31:59.119077  423858 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 23:31:59.119085  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:31:59.119092  423858 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 23:31:59.119096  423858 command_runner.go:130] > # Cgroup setting for conmon
	I0108 23:31:59.119105  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 23:31:59.119109  423858 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 23:31:59.119115  423858 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 23:31:59.119122  423858 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 23:31:59.119129  423858 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 23:31:59.119135  423858 command_runner.go:130] > conmon_env = [
	I0108 23:31:59.119141  423858 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 23:31:59.119147  423858 command_runner.go:130] > ]
	I0108 23:31:59.119152  423858 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 23:31:59.119161  423858 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 23:31:59.119170  423858 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 23:31:59.119174  423858 command_runner.go:130] > # default_env = [
	I0108 23:31:59.119178  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119183  423858 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 23:31:59.119190  423858 command_runner.go:130] > # selinux = false
	I0108 23:31:59.119197  423858 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 23:31:59.119205  423858 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 23:31:59.119212  423858 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 23:31:59.119218  423858 command_runner.go:130] > # seccomp_profile = ""
	I0108 23:31:59.119225  423858 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 23:31:59.119232  423858 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 23:31:59.119238  423858 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 23:31:59.119245  423858 command_runner.go:130] > # which might increase security.
	I0108 23:31:59.119250  423858 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 23:31:59.119258  423858 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 23:31:59.119266  423858 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 23:31:59.119274  423858 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 23:31:59.119281  423858 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 23:31:59.119288  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:31:59.119293  423858 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 23:31:59.119301  423858 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 23:31:59.119305  423858 command_runner.go:130] > # the cgroup blockio controller.
	I0108 23:31:59.119311  423858 command_runner.go:130] > # blockio_config_file = ""
	I0108 23:31:59.119317  423858 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 23:31:59.119323  423858 command_runner.go:130] > # irqbalance daemon.
	I0108 23:31:59.119329  423858 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 23:31:59.119337  423858 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 23:31:59.119342  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:31:59.119346  423858 command_runner.go:130] > # rdt_config_file = ""
	I0108 23:31:59.119351  423858 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 23:31:59.119355  423858 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 23:31:59.119385  423858 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 23:31:59.119395  423858 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 23:31:59.119405  423858 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 23:31:59.119416  423858 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 23:31:59.119423  423858 command_runner.go:130] > # will be added.
	I0108 23:31:59.119428  423858 command_runner.go:130] > # default_capabilities = [
	I0108 23:31:59.119432  423858 command_runner.go:130] > # 	"CHOWN",
	I0108 23:31:59.119438  423858 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 23:31:59.119443  423858 command_runner.go:130] > # 	"FSETID",
	I0108 23:31:59.119449  423858 command_runner.go:130] > # 	"FOWNER",
	I0108 23:31:59.119453  423858 command_runner.go:130] > # 	"SETGID",
	I0108 23:31:59.119459  423858 command_runner.go:130] > # 	"SETUID",
	I0108 23:31:59.119463  423858 command_runner.go:130] > # 	"SETPCAP",
	I0108 23:31:59.119469  423858 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 23:31:59.119473  423858 command_runner.go:130] > # 	"KILL",
	I0108 23:31:59.119479  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119485  423858 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 23:31:59.119493  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:31:59.119499  423858 command_runner.go:130] > # default_sysctls = [
	I0108 23:31:59.119503  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119510  423858 command_runner.go:130] > # List of devices on the host that a
	I0108 23:31:59.119521  423858 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 23:31:59.119528  423858 command_runner.go:130] > # allowed_devices = [
	I0108 23:31:59.119532  423858 command_runner.go:130] > # 	"/dev/fuse",
	I0108 23:31:59.119538  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119543  423858 command_runner.go:130] > # List of additional devices, specified as
	I0108 23:31:59.119552  423858 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 23:31:59.119560  423858 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 23:31:59.119582  423858 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 23:31:59.119589  423858 command_runner.go:130] > # additional_devices = [
	I0108 23:31:59.119593  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119600  423858 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 23:31:59.119604  423858 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 23:31:59.119612  423858 command_runner.go:130] > # 	"/etc/cdi",
	I0108 23:31:59.119619  423858 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 23:31:59.119628  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119640  423858 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 23:31:59.119653  423858 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 23:31:59.119662  423858 command_runner.go:130] > # Defaults to false.
	I0108 23:31:59.119673  423858 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 23:31:59.119685  423858 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 23:31:59.119697  423858 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 23:31:59.119706  423858 command_runner.go:130] > # hooks_dir = [
	I0108 23:31:59.119714  423858 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 23:31:59.119721  423858 command_runner.go:130] > # ]
	I0108 23:31:59.119727  423858 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 23:31:59.119736  423858 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 23:31:59.119742  423858 command_runner.go:130] > # its default mounts from the following two files:
	I0108 23:31:59.119747  423858 command_runner.go:130] > #
	I0108 23:31:59.119754  423858 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 23:31:59.119763  423858 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 23:31:59.119771  423858 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 23:31:59.119774  423858 command_runner.go:130] > #
	I0108 23:31:59.119783  423858 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 23:31:59.119789  423858 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 23:31:59.119798  423858 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 23:31:59.119804  423858 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 23:31:59.119810  423858 command_runner.go:130] > #
	I0108 23:31:59.119814  423858 command_runner.go:130] > # default_mounts_file = ""
	I0108 23:31:59.119823  423858 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 23:31:59.119830  423858 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 23:31:59.119836  423858 command_runner.go:130] > pids_limit = 1024
	I0108 23:31:59.119845  423858 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 23:31:59.119853  423858 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 23:31:59.119861  423858 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 23:31:59.119875  423858 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 23:31:59.119881  423858 command_runner.go:130] > # log_size_max = -1
	I0108 23:31:59.119888  423858 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 23:31:59.119895  423858 command_runner.go:130] > # log_to_journald = false
	I0108 23:31:59.119901  423858 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 23:31:59.119908  423858 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 23:31:59.119914  423858 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 23:31:59.119921  423858 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 23:31:59.119926  423858 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 23:31:59.119932  423858 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 23:31:59.119938  423858 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 23:31:59.119944  423858 command_runner.go:130] > # read_only = false
	I0108 23:31:59.119951  423858 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 23:31:59.119959  423858 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 23:31:59.119966  423858 command_runner.go:130] > # live configuration reload.
	I0108 23:31:59.119970  423858 command_runner.go:130] > # log_level = "info"
	I0108 23:31:59.119978  423858 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 23:31:59.119982  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:31:59.119989  423858 command_runner.go:130] > # log_filter = ""
	I0108 23:31:59.119995  423858 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 23:31:59.120010  423858 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 23:31:59.120014  423858 command_runner.go:130] > # separated by comma.
	I0108 23:31:59.120018  423858 command_runner.go:130] > # uid_mappings = ""
	I0108 23:31:59.120024  423858 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 23:31:59.120030  423858 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 23:31:59.120034  423858 command_runner.go:130] > # separated by comma.
	I0108 23:31:59.120038  423858 command_runner.go:130] > # gid_mappings = ""
	I0108 23:31:59.120044  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 23:31:59.120050  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:31:59.120056  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:31:59.120063  423858 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 23:31:59.120069  423858 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 23:31:59.120077  423858 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 23:31:59.120083  423858 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 23:31:59.120091  423858 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 23:31:59.120097  423858 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 23:31:59.120105  423858 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 23:31:59.120111  423858 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 23:31:59.120117  423858 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 23:31:59.120123  423858 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 23:31:59.120130  423858 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 23:31:59.120135  423858 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 23:31:59.120142  423858 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 23:31:59.120148  423858 command_runner.go:130] > drop_infra_ctr = false
	I0108 23:31:59.120156  423858 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 23:31:59.120164  423858 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 23:31:59.120173  423858 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 23:31:59.120181  423858 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 23:31:59.120191  423858 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 23:31:59.120198  423858 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 23:31:59.120204  423858 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 23:31:59.120213  423858 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 23:31:59.120218  423858 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 23:31:59.120228  423858 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 23:31:59.120236  423858 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 23:31:59.120244  423858 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 23:31:59.120252  423858 command_runner.go:130] > # default_runtime = "runc"
	I0108 23:31:59.120258  423858 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 23:31:59.120268  423858 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 23:31:59.120280  423858 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 23:31:59.120288  423858 command_runner.go:130] > # creation as a file is not desired either.
	I0108 23:31:59.120298  423858 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 23:31:59.120303  423858 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 23:31:59.120321  423858 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 23:31:59.120327  423858 command_runner.go:130] > # ]
	I0108 23:31:59.120334  423858 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 23:31:59.120342  423858 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 23:31:59.120351  423858 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 23:31:59.120359  423858 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 23:31:59.120363  423858 command_runner.go:130] > #
	I0108 23:31:59.120368  423858 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 23:31:59.120376  423858 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 23:31:59.120382  423858 command_runner.go:130] > #  runtime_type = "oci"
	I0108 23:31:59.120389  423858 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 23:31:59.120394  423858 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 23:31:59.120401  423858 command_runner.go:130] > #  allowed_annotations = []
	I0108 23:31:59.120405  423858 command_runner.go:130] > # Where:
	I0108 23:31:59.120411  423858 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 23:31:59.120419  423858 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 23:31:59.120427  423858 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 23:31:59.120433  423858 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 23:31:59.120440  423858 command_runner.go:130] > #   in $PATH.
	I0108 23:31:59.120446  423858 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 23:31:59.120454  423858 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 23:31:59.120460  423858 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 23:31:59.120467  423858 command_runner.go:130] > #   state.
	I0108 23:31:59.120473  423858 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 23:31:59.120481  423858 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 23:31:59.120489  423858 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 23:31:59.120497  423858 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 23:31:59.120503  423858 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 23:31:59.120512  423858 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 23:31:59.120519  423858 command_runner.go:130] > #   The currently recognized values are:
	I0108 23:31:59.120525  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 23:31:59.120534  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 23:31:59.120542  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 23:31:59.120550  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 23:31:59.120557  423858 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 23:31:59.120566  423858 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 23:31:59.120574  423858 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 23:31:59.120583  423858 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 23:31:59.120588  423858 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 23:31:59.120595  423858 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 23:31:59.120599  423858 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 23:31:59.120605  423858 command_runner.go:130] > runtime_type = "oci"
	I0108 23:31:59.120610  423858 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 23:31:59.120619  423858 command_runner.go:130] > runtime_config_path = ""
	I0108 23:31:59.120628  423858 command_runner.go:130] > monitor_path = ""
	I0108 23:31:59.120639  423858 command_runner.go:130] > monitor_cgroup = ""
	I0108 23:31:59.120648  423858 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 23:31:59.120658  423858 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 23:31:59.120667  423858 command_runner.go:130] > # running containers
	I0108 23:31:59.120677  423858 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 23:31:59.120690  423858 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 23:31:59.120723  423858 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 23:31:59.120737  423858 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 23:31:59.120745  423858 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 23:31:59.120750  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 23:31:59.120755  423858 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 23:31:59.120760  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 23:31:59.120765  423858 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 23:31:59.120769  423858 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 23:31:59.120778  423858 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 23:31:59.120786  423858 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 23:31:59.120792  423858 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 23:31:59.120802  423858 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 23:31:59.120812  423858 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 23:31:59.120819  423858 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 23:31:59.120829  423858 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 23:31:59.120839  423858 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 23:31:59.120847  423858 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 23:31:59.120854  423858 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 23:31:59.120861  423858 command_runner.go:130] > # Example:
	I0108 23:31:59.120866  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 23:31:59.120879  423858 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 23:31:59.120886  423858 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 23:31:59.120892  423858 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 23:31:59.120899  423858 command_runner.go:130] > # cpuset = 0
	I0108 23:31:59.120903  423858 command_runner.go:130] > # cpushares = "0-1"
	I0108 23:31:59.120909  423858 command_runner.go:130] > # Where:
	I0108 23:31:59.120915  423858 command_runner.go:130] > # The workload name is workload-type.
	I0108 23:31:59.120924  423858 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 23:31:59.120933  423858 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 23:31:59.120939  423858 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 23:31:59.120949  423858 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 23:31:59.120957  423858 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 23:31:59.120961  423858 command_runner.go:130] > # 
	I0108 23:31:59.120969  423858 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 23:31:59.120975  423858 command_runner.go:130] > #
	I0108 23:31:59.120980  423858 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 23:31:59.120990  423858 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 23:31:59.120999  423858 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 23:31:59.121007  423858 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 23:31:59.121013  423858 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 23:31:59.121020  423858 command_runner.go:130] > [crio.image]
	I0108 23:31:59.121032  423858 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 23:31:59.121040  423858 command_runner.go:130] > # default_transport = "docker://"
	I0108 23:31:59.121046  423858 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 23:31:59.121055  423858 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:31:59.121061  423858 command_runner.go:130] > # global_auth_file = ""
	I0108 23:31:59.121071  423858 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 23:31:59.121078  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:31:59.121083  423858 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 23:31:59.121093  423858 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 23:31:59.121102  423858 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 23:31:59.121108  423858 command_runner.go:130] > # This option supports live configuration reload.
	I0108 23:31:59.121113  423858 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 23:31:59.121121  423858 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 23:31:59.121128  423858 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 23:31:59.121136  423858 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 23:31:59.121145  423858 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 23:31:59.121150  423858 command_runner.go:130] > # pause_command = "/pause"
	I0108 23:31:59.121158  423858 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 23:31:59.121166  423858 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 23:31:59.121172  423858 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 23:31:59.121181  423858 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 23:31:59.121188  423858 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 23:31:59.121193  423858 command_runner.go:130] > # signature_policy = ""
	I0108 23:31:59.121201  423858 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 23:31:59.121207  423858 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 23:31:59.121213  423858 command_runner.go:130] > # changing them here.
	I0108 23:31:59.121218  423858 command_runner.go:130] > # insecure_registries = [
	I0108 23:31:59.121224  423858 command_runner.go:130] > # ]
	I0108 23:31:59.121232  423858 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 23:31:59.121239  423858 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 23:31:59.121243  423858 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 23:31:59.121251  423858 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 23:31:59.121255  423858 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 23:31:59.121263  423858 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 23:31:59.121269  423858 command_runner.go:130] > # CNI plugins.
	I0108 23:31:59.121273  423858 command_runner.go:130] > [crio.network]
	I0108 23:31:59.121280  423858 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 23:31:59.121288  423858 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 23:31:59.121293  423858 command_runner.go:130] > # cni_default_network = ""
	I0108 23:31:59.121301  423858 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 23:31:59.121307  423858 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 23:31:59.121313  423858 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 23:31:59.121319  423858 command_runner.go:130] > # plugin_dirs = [
	I0108 23:31:59.121323  423858 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 23:31:59.121329  423858 command_runner.go:130] > # ]
	I0108 23:31:59.121334  423858 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 23:31:59.121338  423858 command_runner.go:130] > [crio.metrics]
	I0108 23:31:59.121344  423858 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 23:31:59.121350  423858 command_runner.go:130] > enable_metrics = true
	I0108 23:31:59.121355  423858 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 23:31:59.121361  423858 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 23:31:59.121368  423858 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 23:31:59.121378  423858 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 23:31:59.121386  423858 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 23:31:59.121390  423858 command_runner.go:130] > # metrics_collectors = [
	I0108 23:31:59.121396  423858 command_runner.go:130] > # 	"operations",
	I0108 23:31:59.121402  423858 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 23:31:59.121409  423858 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 23:31:59.121413  423858 command_runner.go:130] > # 	"operations_errors",
	I0108 23:31:59.121419  423858 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 23:31:59.121424  423858 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 23:31:59.121431  423858 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 23:31:59.121435  423858 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 23:31:59.121439  423858 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 23:31:59.121444  423858 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 23:31:59.121448  423858 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 23:31:59.121453  423858 command_runner.go:130] > # 	"containers_oom_total",
	I0108 23:31:59.121457  423858 command_runner.go:130] > # 	"containers_oom",
	I0108 23:31:59.121461  423858 command_runner.go:130] > # 	"processes_defunct",
	I0108 23:31:59.121468  423858 command_runner.go:130] > # 	"operations_total",
	I0108 23:31:59.121473  423858 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 23:31:59.121479  423858 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 23:31:59.121485  423858 command_runner.go:130] > # 	"operations_errors_total",
	I0108 23:31:59.121491  423858 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 23:31:59.121496  423858 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 23:31:59.121502  423858 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 23:31:59.121506  423858 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 23:31:59.121514  423858 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 23:31:59.121518  423858 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 23:31:59.121522  423858 command_runner.go:130] > # ]
	I0108 23:31:59.121529  423858 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 23:31:59.121534  423858 command_runner.go:130] > # metrics_port = 9090
	I0108 23:31:59.121541  423858 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 23:31:59.121545  423858 command_runner.go:130] > # metrics_socket = ""
	I0108 23:31:59.121553  423858 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 23:31:59.121559  423858 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 23:31:59.121567  423858 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 23:31:59.121574  423858 command_runner.go:130] > # certificate on any modification event.
	I0108 23:31:59.121578  423858 command_runner.go:130] > # metrics_cert = ""
	I0108 23:31:59.121586  423858 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 23:31:59.121591  423858 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 23:31:59.121597  423858 command_runner.go:130] > # metrics_key = ""
	I0108 23:31:59.121602  423858 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 23:31:59.121606  423858 command_runner.go:130] > [crio.tracing]
	I0108 23:31:59.121616  423858 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 23:31:59.121626  423858 command_runner.go:130] > # enable_tracing = false
	I0108 23:31:59.121639  423858 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 23:31:59.121650  423858 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 23:31:59.121660  423858 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 23:31:59.121671  423858 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 23:31:59.121683  423858 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 23:31:59.121692  423858 command_runner.go:130] > [crio.stats]
	I0108 23:31:59.121701  423858 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 23:31:59.121713  423858 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 23:31:59.121722  423858 command_runner.go:130] > # stats_collection_period = 0
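
	The block above is the full TOML that "crio config" renders on the node; minikube scans it for the handful of values it cares about (cgroup_manager, pause_image, storage driver). A minimal sketch, assuming crio is on PATH, of reading a single top-level key back out of that output (the helper is illustrative, not minikube's actual parser):

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	// crioConfigValue runs "crio config" and returns the value of one
	// top-level key, e.g. cgroup_manager = "cgroupfs".
	func crioConfigValue(key string) (string, error) {
		out, err := exec.Command("crio", "config").Output()
		if err != nil {
			return "", fmt.Errorf("crio config: %w", err)
		}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, key+" =") || strings.HasPrefix(line, key+"=") {
				_, val, _ := strings.Cut(line, "=")
				return strings.Trim(strings.TrimSpace(val), `"`), nil
			}
		}
		return "", fmt.Errorf("%s not found in crio config output", key)
	}

	func main() {
		for _, k := range []string{"cgroup_manager", "pause_image", "storage_driver"} {
			v, err := crioConfigValue(k)
			fmt.Printf("%s = %q (err=%v)\n", k, v, err)
		}
	}
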
	I0108 23:31:59.121803  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:31:59.121814  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:31:59.121826  423858 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:31:59.121849  423858 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-266395 NodeName:multinode-266395-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:31:59.122033  423858 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-266395-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
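
	The generated kubeadm config above is rendered from Go templates in minikube's kubeadm package, with the per-node values (advertise address, node name, CRI socket, pod subnet) substituted in. A simplified sketch of that kind of rendering, using illustrative field names rather than minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down InitConfiguration fragment; the fields mirror the values
	// visible in the rendered config above.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		tmpl := template.Must(template.New("init").Parse(initCfg))
		// Values taken from the log above for multinode-266395-m03.
		_ = tmpl.Execute(os.Stdout, struct {
			NodeIP        string
			APIServerPort int
			NodeName      string
		}{"192.168.39.239", 8443, "multinode-266395-m03"})
	}
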
	
	I0108 23:31:59.122091  423858 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-266395-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:31:59.122146  423858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 23:31:59.131902  423858 command_runner.go:130] > kubeadm
	I0108 23:31:59.131930  423858 command_runner.go:130] > kubectl
	I0108 23:31:59.131937  423858 command_runner.go:130] > kubelet
	I0108 23:31:59.132115  423858 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:31:59.132174  423858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 23:31:59.141276  423858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 23:31:59.157282  423858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
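
	The two scp steps above place the rendered kubelet drop-in and unit file at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A minimal sketch of writing such a unit file (paths from the log; the helper and the shortened ExecStart line are illustrative, not the exact contents transferred):

	package main

	import (
		"os"
		"path/filepath"
	)

	// writeUnitFile creates the parent directory and writes the unit contents,
	// mirroring the "mkdir -p" plus file copy sequence in the log above.
	func writeUnitFile(path string, contents []byte) error {
		if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
			return err
		}
		return os.WriteFile(path, contents, 0o644)
	}

	func main() {
		// Shortened drop-in for illustration; the real file carries the full kubelet flags shown above.
		dropIn := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf\n")
		if err := writeUnitFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn); err != nil {
			panic(err)
		}
	}
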
	I0108 23:31:59.173318  423858 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0108 23:31:59.177083  423858 command_runner.go:130] > 192.168.39.18	control-plane.minikube.internal
	I0108 23:31:59.177160  423858 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:31:59.177465  423858 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:31:59.177507  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:31:59.177547  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:31:59.192548  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0108 23:31:59.193035  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:31:59.193533  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:31:59.193553  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:31:59.193879  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:31:59.194120  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:31:59.194306  423858 start.go:304] JoinCluster: &{Name:multinode-266395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-266395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:31:59.194468  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 23:31:59.194497  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:31:59.197521  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:31:59.197898  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:31:59.197927  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:31:59.198104  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:31:59.198284  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:31:59.198472  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:31:59.198620  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:31:59.379824  423858 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token la78mx.unmk92hf2arofrxl --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
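	The token printed above is what the worker will later use to authenticate its join. The invocation the driver runs over SSH on the control plane (binary path as logged for v1.28.4) boils down to:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm token create --print-join-command --ttl=0

	--ttl=0 makes the bootstrap token non-expiring, and --print-join-command emits the full kubeadm join line including the discovery CA hash, which is exactly the command echoed back above.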
	I0108 23:31:59.381580  423858 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 23:31:59.381644  423858 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:31:59.382067  423858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:31:59.382125  423858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:31:59.398254  423858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0108 23:31:59.398660  423858 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:31:59.399151  423858 main.go:141] libmachine: Using API Version  1
	I0108 23:31:59.399176  423858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:31:59.399525  423858 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:31:59.399728  423858 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:31:59.399987  423858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-266395-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 23:31:59.400015  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:31:59.402659  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:31:59.403032  423858 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:27:49 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:31:59.403052  423858 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:31:59.403184  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:31:59.403391  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:31:59.403555  423858 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:31:59.403724  423858 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:31:59.611329  423858 command_runner.go:130] > node/multinode-266395-m03 cordoned
	I0108 23:32:02.648878  423858 command_runner.go:130] > pod "busybox-5bc68d56bd-wcrzw" has DeletionTimestamp older than 1 seconds, skipping
	I0108 23:32:02.648918  423858 command_runner.go:130] > node/multinode-266395-m03 drained
	I0108 23:32:02.650801  423858 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 23:32:02.650823  423858 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-brbnm, kube-system/kube-proxy-vbq4b
	I0108 23:32:02.650857  423858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-266395-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.250834835s)
	I0108 23:32:02.650879  423858 node.go:108] successfully drained node "m03"
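	Before the worker can rejoin, its stale node entry is drained with the flags shown in the completed command above. As a standalone sketch (kubeconfig and kubectl paths as logged):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-266395-m03 \
	        --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
	        --disable-eviction --ignore-daemonsets --delete-emptydir-data

	--delete-local-data is only the deprecated spelling of --delete-emptydir-data, which is what the deprecation warning logged at 23:32:02.650801 refers to; DaemonSet-managed pods (kindnet, kube-proxy) are skipped via --ignore-daemonsets.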
	I0108 23:32:02.651313  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:32:02.651609  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:32:02.652015  423858 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 23:32:02.652082  423858 round_trippers.go:463] DELETE https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:32:02.652094  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:02.652103  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:02.652112  423858 round_trippers.go:473]     Content-Type: application/json
	I0108 23:32:02.652125  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:02.665220  423858 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0108 23:32:02.665246  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:02.665256  423858 round_trippers.go:580]     Audit-Id: 2fdf107d-c101-4e2e-a8c8-018f39d20d26
	I0108 23:32:02.665264  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:02.665273  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:02.665281  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:02.665289  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:02.665297  423858 round_trippers.go:580]     Content-Length: 171
	I0108 23:32:02.665305  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:02 GMT
	I0108 23:32:02.665693  423858 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-266395-m03","kind":"nodes","uid":"9520eb58-7ccf-441c-a72a-288c0fd8fc84"}}
	I0108 23:32:02.665742  423858 node.go:124] successfully deleted node "m03"
	I0108 23:32:02.665752  423858 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
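	The node object itself is removed with a direct DELETE against the API server, as the request/response pair above shows. A rough kubectl equivalent of that call, using the same kubeconfig the client config was loaded from, would be:

	    kubectl --kubeconfig=/home/jenkins/minikube-integration/17830-399915/kubeconfig \
	      delete node multinode-266395-m03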
	I0108 23:32:02.665781  423858 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 23:32:02.665805  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token la78mx.unmk92hf2arofrxl --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-266395-m03"
	I0108 23:32:02.722975  423858 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 23:32:02.880505  423858 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 23:32:02.880539  423858 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 23:32:02.934697  423858 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 23:32:02.935057  423858 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 23:32:02.935077  423858 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 23:32:03.088415  423858 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 23:32:03.623636  423858 command_runner.go:130] > This node has joined the cluster:
	I0108 23:32:03.623666  423858 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 23:32:03.623677  423858 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 23:32:03.623687  423858 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 23:32:03.626353  423858 command_runner.go:130] ! W0108 23:32:02.714629    2350 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 23:32:03.626416  423858 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 23:32:03.626431  423858 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 23:32:03.626445  423858 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 23:32:03.626481  423858 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 23:32:03.919908  423858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-266395 minikube.k8s.io/updated_at=2024_01_08T23_32_03_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 23:32:04.027330  423858 command_runner.go:130] > node/multinode-266395-m02 labeled
	I0108 23:32:04.042104  423858 command_runner.go:130] > node/multinode-266395-m03 labeled
	I0108 23:32:04.044298  423858 start.go:306] JoinCluster complete in 4.849991029s
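	The re-join itself is the kubeadm join line generated earlier, plus the flags the driver adds for a crio-based node; as a standalone sketch of the logged command:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm join control-plane.minikube.internal:8443 \
	        --token la78mx.unmk92hf2arofrxl \
	        --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	        --ignore-preflight-errors=all \
	        --cri-socket /var/run/crio/crio.sock \
	        --node-name=multinode-266395-m03

	--ignore-preflight-errors=all is what lets the join proceed past the [WARNING FileAvailable--etc-kubernetes-kubelet.conf] and [WARNING Port-10250] checks above, since the machine still has kubelet state and certificates from its previous cluster membership.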
	I0108 23:32:04.044335  423858 cni.go:84] Creating CNI manager for ""
	I0108 23:32:04.044345  423858 cni.go:136] 3 nodes found, recommending kindnet
	I0108 23:32:04.044415  423858 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 23:32:04.050638  423858 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 23:32:04.050673  423858 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 23:32:04.050687  423858 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 23:32:04.050697  423858 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 23:32:04.050706  423858 command_runner.go:130] > Access: 2024-01-08 23:27:50.050727036 +0000
	I0108 23:32:04.050714  423858 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 23:32:04.050729  423858 command_runner.go:130] > Change: 2024-01-08 23:27:48.185727036 +0000
	I0108 23:32:04.050735  423858 command_runner.go:130] >  Birth: -
	I0108 23:32:04.050955  423858 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 23:32:04.050977  423858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 23:32:04.071177  423858 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 23:32:04.497534  423858 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:32:04.497564  423858 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 23:32:04.497570  423858 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 23:32:04.497575  423858 command_runner.go:130] > daemonset.apps/kindnet configured
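	With three nodes detected, the kindnet CNI manifest is written to /var/tmp/minikube/cni.yaml and re-applied, which is why every object comes back unchanged or merely configured. The applied command, as logged, is:

	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml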
	I0108 23:32:04.498288  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:32:04.498643  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:32:04.499096  423858 round_trippers.go:463] GET https://192.168.39.18:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 23:32:04.499116  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.499128  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.499138  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.502400  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:32:04.502416  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.502423  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.502429  423858 round_trippers.go:580]     Audit-Id: 547530bc-d68b-4e97-a1e1-743f55d8826f
	I0108 23:32:04.502439  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.502448  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.502460  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.502473  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.502481  423858 round_trippers.go:580]     Content-Length: 291
	I0108 23:32:04.502503  423858 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3b98c5e8-c250-43d2-8c59-f9ae5ee3078d","resourceVersion":"884","creationTimestamp":"2024-01-08T23:17:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 23:32:04.502599  423858 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-266395" context rescaled to 1 replicas
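	The coredns deployment is rescaled to a single replica through the autoscaling/v1 scale subresource read in the GET above. A rough kubectl equivalent, assuming access via the same kubeconfig, would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1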
	I0108 23:32:04.502664  423858 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.239 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 23:32:04.504392  423858 out.go:177] * Verifying Kubernetes components...
	I0108 23:32:04.506235  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:32:04.528640  423858 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:32:04.529008  423858 kapi.go:59] client config for multinode-266395: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/multinode-266395/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:32:04.529520  423858 node_ready.go:35] waiting up to 6m0s for node "multinode-266395-m03" to be "Ready" ...
	I0108 23:32:04.529623  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:32:04.529633  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.529641  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.529647  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.532510  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.532531  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.532541  423858 round_trippers.go:580]     Audit-Id: 8f9aa6b5-243f-41c8-9310-9d5bbe8e942a
	I0108 23:32:04.532554  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.532563  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.532571  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.532578  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.532586  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.532934  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m03","uid":"80598f48-816b-4d3a-b9ec-c68a967b82db","resourceVersion":"1208","creationTimestamp":"2024-01-08T23:32:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_32_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:32:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:32:04.533283  423858 node_ready.go:49] node "multinode-266395-m03" has status "Ready":"True"
	I0108 23:32:04.533303  423858 node_ready.go:38] duration metric: took 3.749725ms waiting for node "multinode-266395-m03" to be "Ready" ...
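	Readiness of the new node is confirmed by fetching the node object and checking its Ready condition, which here is already True on the first poll. A rough standalone equivalent of this wait, assuming kubectl access to the same cluster, would be:

	    kubectl wait --for=condition=Ready node/multinode-266395-m03 --timeout=6m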
	I0108 23:32:04.533318  423858 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:32:04.533390  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods
	I0108 23:32:04.533405  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.533416  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.533424  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.537588  423858 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 23:32:04.537611  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.537620  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.537626  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.537631  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.537636  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.537641  423858 round_trippers.go:580]     Audit-Id: f8176593-3354-4efb-ad77-85026412123f
	I0108 23:32:04.537647  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.540372  423858 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1217"},"items":[{"metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82047 chars]
	I0108 23:32:04.543014  423858 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.543102  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-r8pvw
	I0108 23:32:04.543111  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.543119  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.543127  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.546075  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.546094  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.546102  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.546110  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.546117  423858 round_trippers.go:580]     Audit-Id: 081398bf-86b4-4aee-bdf0-3140ff2d26ba
	I0108 23:32:04.546131  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.546138  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.546146  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.546344  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-r8pvw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5300c187-4f1f-4330-ae19-6bf2855763f2","resourceVersion":"880","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fed297e4-e5eb-4805-9f5b-b8a13d6d49f6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0108 23:32:04.546885  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.546908  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.546918  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.546927  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.549661  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.549682  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.549692  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.549700  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.549709  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.549717  423858 round_trippers.go:580]     Audit-Id: ca9adb4f-a354-4534-8c28-0c0e59340794
	I0108 23:32:04.549725  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.549740  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.550119  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:04.550566  423858 pod_ready.go:92] pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:04.550589  423858 pod_ready.go:81] duration metric: took 7.552552ms waiting for pod "coredns-5dd5756b68-r8pvw" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.550603  423858 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.550673  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-266395
	I0108 23:32:04.550686  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.550696  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.550705  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.553104  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.553123  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.553132  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.553141  423858 round_trippers.go:580]     Audit-Id: fef4a30f-7cfa-4507-bbdd-491d8d989bce
	I0108 23:32:04.553148  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.553156  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.553164  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.553172  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.553646  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-266395","namespace":"kube-system","uid":"ad57572e-a901-4042-b907-d0738c803dbd","resourceVersion":"865","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.18:2379","kubernetes.io/config.hash":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.mirror":"c3877d55338da5237c1c7dded8cd78f4","kubernetes.io/config.seen":"2024-01-08T23:17:58.693595452Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0108 23:32:04.554103  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.554123  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.554134  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.554143  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.556359  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.556378  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.556388  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.556395  423858 round_trippers.go:580]     Audit-Id: ea70ce5a-a91f-4d9a-9e4e-6952cfe84055
	I0108 23:32:04.556403  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.556410  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.556417  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.556429  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.556623  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:04.557001  423858 pod_ready.go:92] pod "etcd-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:04.557023  423858 pod_ready.go:81] duration metric: took 6.411068ms waiting for pod "etcd-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.557046  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.557111  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-266395
	I0108 23:32:04.557122  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.557132  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.557144  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.559202  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.559220  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.559228  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.559236  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.559244  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.559251  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.559259  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.559274  423858 round_trippers.go:580]     Audit-Id: c293d29d-9adc-4b74-9257-3f7eb94a2199
	I0108 23:32:04.559599  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-266395","namespace":"kube-system","uid":"70b0f39e-3999-4a5b-bae6-c08ae2adeb49","resourceVersion":"860","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.18:8443","kubernetes.io/config.hash":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.mirror":"693c20f812d77c22a17dccfbf3ed1fb9","kubernetes.io/config.seen":"2024-01-08T23:17:58.693588503Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0108 23:32:04.560080  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.560094  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.560102  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.560108  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.561949  423858 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 23:32:04.561970  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.561979  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.561989  423858 round_trippers.go:580]     Audit-Id: 5b4b390b-3627-4d9c-aafe-5cf29a3a1f49
	I0108 23:32:04.561997  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.562004  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.562013  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.562027  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.562145  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:04.562526  423858 pod_ready.go:92] pod "kube-apiserver-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:04.562548  423858 pod_ready.go:81] duration metric: took 5.489933ms waiting for pod "kube-apiserver-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.562560  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.562620  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-266395
	I0108 23:32:04.562632  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.562642  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.562650  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.565049  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.565075  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.565085  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.565092  423858 round_trippers.go:580]     Audit-Id: c8f1254b-3d4d-4e37-9e32-d3fec3a36e5b
	I0108 23:32:04.565100  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.565108  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.565116  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.565127  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.565295  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-266395","namespace":"kube-system","uid":"32b7c02b-f69c-46ac-ab67-d61a4077b5b2","resourceVersion":"850","creationTimestamp":"2024-01-08T23:17:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.mirror":"23f79a1dbfb4b47131ec4bff995f3d05","kubernetes.io/config.seen":"2024-01-08T23:17:49.571485221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0108 23:32:04.565785  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.565804  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.565814  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.565823  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.568747  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.568766  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.568774  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.568782  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.568792  423858 round_trippers.go:580]     Audit-Id: 10bd6a9c-4821-4399-8fa4-8150228a3e52
	I0108 23:32:04.568802  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.568811  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.568823  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.569128  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:04.569524  423858 pod_ready.go:92] pod "kube-controller-manager-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:04.569551  423858 pod_ready.go:81] duration metric: took 6.982169ms waiting for pod "kube-controller-manager-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.569566  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.730002  423858 request.go:629] Waited for 160.334988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:32:04.730083  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvmgf
	I0108 23:32:04.730090  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.730109  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.730121  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.733108  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:04.733132  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.733143  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.733153  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.733162  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.733171  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.733183  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.733194  423858 round_trippers.go:580]     Audit-Id: 5b4e9675-4b23-417d-b278-710511772854
	I0108 23:32:04.733567  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lvmgf","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c37677d-6832-4d6b-8f29-c23d25347535","resourceVersion":"796","creationTimestamp":"2024-01-08T23:18:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0108 23:32:04.930542  423858 request.go:629] Waited for 196.39249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.930623  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:04.930631  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:04.930644  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:04.930659  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:04.934190  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:32:04.934255  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:04.934273  423858 round_trippers.go:580]     Audit-Id: b11ab2f5-6fe6-47a2-967c-68a2a9816a73
	I0108 23:32:04.934282  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:04.934291  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:04.934298  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:04.934307  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:04.934314  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:04 GMT
	I0108 23:32:04.934639  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:04.935103  423858 pod_ready.go:92] pod "kube-proxy-lvmgf" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:04.935131  423858 pod_ready.go:81] duration metric: took 365.555339ms waiting for pod "kube-proxy-lvmgf" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:04.935144  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:05.130332  423858 request.go:629] Waited for 195.091094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:32:05.130418  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4q5n
	I0108 23:32:05.130427  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:05.130438  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:05.130452  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:05.133277  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:05.133307  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:05.133318  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:05.133327  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:05.133335  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:05 GMT
	I0108 23:32:05.133342  423858 round_trippers.go:580]     Audit-Id: 42184839-d737-4b5a-b84b-1d6a71584ac9
	I0108 23:32:05.133349  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:05.133358  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:05.133568  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v4q5n","generateName":"kube-proxy-","namespace":"kube-system","uid":"8ef0ea4c-f518-4179-9c48-4e1628a9752b","resourceVersion":"1045","creationTimestamp":"2024-01-08T23:18:55Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:18:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 23:32:05.330518  423858 request.go:629] Waited for 196.39875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:32:05.330610  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m02
	I0108 23:32:05.330620  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:05.330628  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:05.330634  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:05.333329  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:05.333356  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:05.333366  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:05.333374  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:05.333380  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:05.333388  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:05 GMT
	I0108 23:32:05.333395  423858 round_trippers.go:580]     Audit-Id: aae3609e-be51-4c2c-ad10-c7614151dfaa
	I0108 23:32:05.333402  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:05.333586  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m02","uid":"ac068e1a-04e7-4b19-9f0f-13e0f582f5a0","resourceVersion":"1207","creationTimestamp":"2024-01-08T23:30:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_32_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:32:05.333866  423858 pod_ready.go:92] pod "kube-proxy-v4q5n" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:05.333888  423858 pod_ready.go:81] duration metric: took 398.733544ms waiting for pod "kube-proxy-v4q5n" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:05.333900  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:05.530611  423858 request.go:629] Waited for 196.610809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:32:05.530683  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbq4b
	I0108 23:32:05.530688  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:05.530697  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:05.530703  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:05.533460  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:05.533487  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:05.533496  423858 round_trippers.go:580]     Audit-Id: fa974495-5fe3-4549-a358-7ff2b82df602
	I0108 23:32:05.533504  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:05.533511  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:05.533518  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:05.533525  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:05.533532  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:05 GMT
	I0108 23:32:05.533754  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vbq4b","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4b0965a-b7bc-4a1a-8fc2-1397277c3710","resourceVersion":"1225","creationTimestamp":"2024-01-08T23:19:45Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e92da278-0f24-44c0-ab91-c0c7be881952","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:19:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e92da278-0f24-44c0-ab91-c0c7be881952\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 23:32:05.730676  423858 request.go:629] Waited for 196.487628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:32:05.730755  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395-m03
	I0108 23:32:05.730761  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:05.730769  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:05.730775  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:05.733763  423858 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 23:32:05.733789  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:05.733802  423858 round_trippers.go:580]     Audit-Id: 4909bdf9-f14e-4ca6-91ab-4ba41444ea9f
	I0108 23:32:05.733810  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:05.733819  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:05.733827  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:05.733836  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:05.733844  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:05 GMT
	I0108 23:32:05.734019  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395-m03","uid":"80598f48-816b-4d3a-b9ec-c68a967b82db","resourceVersion":"1208","creationTimestamp":"2024-01-08T23:32:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T23_32_03_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:32:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 23:32:05.734374  423858 pod_ready.go:92] pod "kube-proxy-vbq4b" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:05.734398  423858 pod_ready.go:81] duration metric: took 400.48888ms waiting for pod "kube-proxy-vbq4b" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:05.734412  423858 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:05.930452  423858 request.go:629] Waited for 195.934741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:32:05.930525  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-266395
	I0108 23:32:05.930535  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:05.930549  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:05.930559  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:05.938193  423858 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 23:32:05.938216  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:05.938224  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:05.938231  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:05 GMT
	I0108 23:32:05.938240  423858 round_trippers.go:580]     Audit-Id: 5c8f4a91-8903-4cd4-87b0-d7221bf38c8c
	I0108 23:32:05.938248  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:05.938263  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:05.938270  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:05.938905  423858 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-266395","namespace":"kube-system","uid":"df5e2822-435f-4264-854b-929b6acccd99","resourceVersion":"847","creationTimestamp":"2024-01-08T23:17:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.mirror":"54274c879f4fed7fb51beb6c8ca6c27b","kubernetes.io/config.seen":"2024-01-08T23:17:58.693594221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T23:17:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0108 23:32:06.130689  423858 request.go:629] Waited for 191.410245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:06.130797  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes/multinode-266395
	I0108 23:32:06.130809  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:06.130820  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:06.130833  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:06.133858  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:32:06.133889  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:06.133899  423858 round_trippers.go:580]     Audit-Id: c0860be6-2a5b-4006-9ce9-4945bfb0cfd2
	I0108 23:32:06.133906  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:06.133913  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:06.133920  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:06.133926  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:06.133936  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:06 GMT
	I0108 23:32:06.134171  423858 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T23:17:55Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0108 23:32:06.134624  423858 pod_ready.go:92] pod "kube-scheduler-multinode-266395" in "kube-system" namespace has status "Ready":"True"
	I0108 23:32:06.134654  423858 pod_ready.go:81] duration metric: took 400.229998ms waiting for pod "kube-scheduler-multinode-266395" in "kube-system" namespace to be "Ready" ...
	I0108 23:32:06.134679  423858 pod_ready.go:38] duration metric: took 1.601345047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:32:06.134701  423858 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:32:06.134762  423858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:32:06.148906  423858 system_svc.go:56] duration metric: took 14.1958ms WaitForService to wait for kubelet.
	I0108 23:32:06.148952  423858 kubeadm.go:581] duration metric: took 1.646247387s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:32:06.148978  423858 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:32:06.330479  423858 request.go:629] Waited for 181.412356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.18:8443/api/v1/nodes
	I0108 23:32:06.330551  423858 round_trippers.go:463] GET https://192.168.39.18:8443/api/v1/nodes
	I0108 23:32:06.330557  423858 round_trippers.go:469] Request Headers:
	I0108 23:32:06.330565  423858 round_trippers.go:473]     Accept: application/json, */*
	I0108 23:32:06.330571  423858 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 23:32:06.333949  423858 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 23:32:06.333969  423858 round_trippers.go:577] Response Headers:
	I0108 23:32:06.333976  423858 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4fb01211-10df-4db3-9b04-77713e907a4a
	I0108 23:32:06.333982  423858 round_trippers.go:580]     Date: Mon, 08 Jan 2024 23:32:06 GMT
	I0108 23:32:06.333987  423858 round_trippers.go:580]     Audit-Id: 8b46ef65-e86c-4df7-89e0-c24a7ced17e3
	I0108 23:32:06.333992  423858 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 23:32:06.333998  423858 round_trippers.go:580]     Content-Type: application/json
	I0108 23:32:06.334003  423858 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e5c23012-0c7d-447d-9db0-dc3ba6fdf570
	I0108 23:32:06.334301  423858 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1227"},"items":[{"metadata":{"name":"multinode-266395","uid":"8995c43a-4e31-4b19-b0c8-f3e67e52d25b","resourceVersion":"892","creationTimestamp":"2024-01-08T23:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-266395","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-266395","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T23_17_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I0108 23:32:06.335153  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:32:06.335222  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:32:06.335267  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:32:06.335274  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:32:06.335282  423858 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:32:06.335291  423858 node_conditions.go:123] node cpu capacity is 2
	I0108 23:32:06.335298  423858 node_conditions.go:105] duration metric: took 186.312999ms to run NodePressure ...
	I0108 23:32:06.335314  423858 start.go:228] waiting for startup goroutines ...
	I0108 23:32:06.335384  423858 start.go:242] writing updated cluster config ...
	I0108 23:32:06.335808  423858 ssh_runner.go:195] Run: rm -f paused
	I0108 23:32:06.398847  423858 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 23:32:06.401953  423858 out.go:177] * Done! kubectl is now configured to use "multinode-266395" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 23:27:48 UTC, ends at Mon 2024-01-08 23:32:07 UTC. --
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.593538559Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0148ada5bb3dcfed7e35f6a165a379a57b055fbc2cf0974daa1442032f00f50c,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-nl6pn,Uid:72697c77-17fa-4588-9f0f-c41eaad79e47,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756517643157099,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:28:21.568468146Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0e552bf587c9f4af76888b405d23ae9d97ee5c4ca825af77e2f51df257aa641,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-r8pvw,Uid:5300c187-4f1f-4330-ae19-6bf2855763f2,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1704756517638828944,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:28:21.568434695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0c9698b8bc15549b4641f613fce7e16f49c319e96379f8c3d023c7535bdf768d,Metadata:&PodSandboxMetadata{Name:kindnet-mnltq,Uid:c65752e0-cd30-49cf-9645-5befeecc3d34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756501956949169,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c65752e0-cd30-49cf-9645-5befeecc3d34,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-01-08T23:28:21.568438847Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f15dcd0d-59b5-4f16-94c7-425f162c60ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756501931878130,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T23:28:21.568433019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d474131f3b28e9abd7b100d576e43e4f7b827740c4bec50ce9a1e4066917a90,Metadata:&PodSandboxMetadata{Name:kube-proxy-lvmgf,Uid:9c37677d-6832-4d6b-8f29-c23d25347535,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756501896925049,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25347535,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T23:28:21.568426219Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e521481cf1afe0e075f1bee4193fa4d98e528fc59630b9fac5950d7ba5bf1c8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-266395,Uid:54274c879f4fed7fb51beb6c8ca6c27b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756496077656937,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 54274c879f4fed7fb51beb6c8ca6c27b,kubernetes.io/config.seen: 2024-01-08T23:28:15.560317225Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80e79b595b52f2a2b414abe46b412e50251c76b6c01487268c094535aa03434f,Metadata:&PodSandboxMetadata{Name:etcd-multinode-2663
95,Uid:c3877d55338da5237c1c7dded8cd78f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756496069471273,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.18:2379,kubernetes.io/config.hash: c3877d55338da5237c1c7dded8cd78f4,kubernetes.io/config.seen: 2024-01-08T23:28:15.560310926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:37ab957e5b5955db093b2d1a535ed5e259554bc06d5a58ef47f9b7ca33799e2e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-266395,Uid:23f79a1dbfb4b47131ec4bff995f3d05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756496062307729,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23f79a1dbfb4b47131ec4bff995f3d05,kubernetes.io/config.seen: 2024-01-08T23:28:15.560316244Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:213bda2a1e5e3faf0f7a1613634db2c7b5fffc0cfcd9c2f33c43c8339178aa91,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-266395,Uid:693c20f812d77c22a17dccfbf3ed1fb9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704756496058889665,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.18:8443,kubernet
es.io/config.hash: 693c20f812d77c22a17dccfbf3ed1fb9,kubernetes.io/config.seen: 2024-01-08T23:28:15.560315010Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=95d28a98-f2fd-416e-abca-ceda141ecc84 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.594278980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c638f6e8-4001-4e8c-8231-69c4ee979cc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.594357463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c638f6e8-4001-4e8c-8231-69c4ee979cc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.594547003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:400a936610e860740363360e8ccf9ad42b553f3b8e7c714e651eae26a06b7d97,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704756533811195471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b9fb4fad6cd0336e433b4e1fbef0e6b22fb91a554f56767c63aebd4d88ab7f,PodSandboxId:0148ada5bb3dcfed7e35f6a165a379a57b055fbc2cf0974daa1442032f00f50c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704756519196198754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413e611ab7e26f12dc6ba8cc6ee4b07b32ac22f3b38ace7dc8b04e36ceb914c,PodSandboxId:b0e552bf587c9f4af76888b405d23ae9d97ee5c4ca825af77e2f51df257aa641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704756518414567744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98228bce7d5d2c85811d112163a382076e4374d9e49f8698277b65e40932ceeb,PodSandboxId:0c9698b8bc15549b4641f613fce7e16f49c319e96379f8c3d023c7535bdf768d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704756505041244224,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacc643e910c7972cdede6f829771825cdac6917a0aa330d00366ccce66fced8,PodSandboxId:4d474131f3b28e9abd7b100d576e43e4f7b827740c4bec50ce9a1e4066917a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704756502381716909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d25
347535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c998d0305fa7ec45ea79099ab8c3903e65b5074d0663d21f248c61009580f88,PodSandboxId:80e79b595b52f2a2b414abe46b412e50251c76b6c01487268c094535aa03434f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704756497215142317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes
.container.hash: 2fa57b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a924c54057249bc44a9876e427de27d3d2131f1bd6850604781ed7ea1ff13141,PodSandboxId:1e521481cf1afe0e075f1bee4193fa4d98e528fc59630b9fac5950d7ba5bf1c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704756496909334859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8ddb85d7fc9b10c4b897fd2891ae557edb2790646d84d5946750ff551caad,PodSandboxId:37ab957e5b5955db093b2d1a535ed5e259554bc06d5a58ef47f9b7ca33799e2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704756496832528784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac1178743dee86434ca6a69f89f7ed0a5944f5dda9eac8a5e21025354ccde67,PodSandboxId:213bda2a1e5e3faf0f7a1613634db2c7b5fffc0cfcd9c2f33c43c8339178aa91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704756496557842743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes
.container.hash: 9187ed9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c638f6e8-4001-4e8c-8231-69c4ee979cc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.631508771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c23d8b9b-8288-476a-a46b-9538726d8ea5 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.631564117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c23d8b9b-8288-476a-a46b-9538726d8ea5 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.633095437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f7b387f7-d637-4ad7-84e7-b4a690b79798 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.633473936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704756727633457441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f7b387f7-d637-4ad7-84e7-b4a690b79798 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.634172730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9eddad56-5109-46e5-8bdc-baec1f0132e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.634217538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9eddad56-5109-46e5-8bdc-baec1f0132e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.634415256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:400a936610e860740363360e8ccf9ad42b553f3b8e7c714e651eae26a06b7d97,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704756533811195471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b9fb4fad6cd0336e433b4e1fbef0e6b22fb91a554f56767c63aebd4d88ab7f,PodSandboxId:0148ada5bb3dcfed7e35f6a165a379a57b055fbc2cf0974daa1442032f00f50c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704756519196198754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413e611ab7e26f12dc6ba8cc6ee4b07b32ac22f3b38ace7dc8b04e36ceb914c,PodSandboxId:b0e552bf587c9f4af76888b405d23ae9d97ee5c4ca825af77e2f51df257aa641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704756518414567744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98228bce7d5d2c85811d112163a382076e4374d9e49f8698277b65e40932ceeb,PodSandboxId:0c9698b8bc15549b4641f613fce7e16f49c319e96379f8c3d023c7535bdf768d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704756505041244224,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58cac4b789316b93f88c08bb28ec2b7a744b57eb8ba26594eeaa66325d6219af,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704756502499281332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacc643e910c7972cdede6f829771825cdac6917a0aa330d00366ccce66fced8,PodSandboxId:4d474131f3b28e9abd7b100d576e43e4f7b827740c4bec50ce9a1e4066917a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704756502381716909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d2534
7535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c998d0305fa7ec45ea79099ab8c3903e65b5074d0663d21f248c61009580f88,PodSandboxId:80e79b595b52f2a2b414abe46b412e50251c76b6c01487268c094535aa03434f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704756497215142317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2fa57b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a924c54057249bc44a9876e427de27d3d2131f1bd6850604781ed7ea1ff13141,PodSandboxId:1e521481cf1afe0e075f1bee4193fa4d98e528fc59630b9fac5950d7ba5bf1c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704756496909334859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8ddb85d7fc9b10c4b897fd2891ae557edb2790646d84d5946750ff551caad,PodSandboxId:37ab957e5b5955db093b2d1a535ed5e259554bc06d5a58ef47f9b7ca33799e2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704756496832528784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac1178743dee86434ca6a69f89f7ed0a5944f5dda9eac8a5e21025354ccde67,PodSandboxId:213bda2a1e5e3faf0f7a1613634db2c7b5fffc0cfcd9c2f33c43c8339178aa91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704756496557842743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9187ed9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9eddad56-5109-46e5-8bdc-baec1f0132e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.674829113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fc529c98-3169-4e2c-a4a7-974012eba133 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.674888854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fc529c98-3169-4e2c-a4a7-974012eba133 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.677563647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e40f48f7-96a7-45f4-a13c-e402cb705197 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.678954922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704756727678936133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e40f48f7-96a7-45f4-a13c-e402cb705197 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.679973313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5d3bdbb5-1c8d-4920-9a27-09b8a88142e0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.680026346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5d3bdbb5-1c8d-4920-9a27-09b8a88142e0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.680813170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:400a936610e860740363360e8ccf9ad42b553f3b8e7c714e651eae26a06b7d97,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704756533811195471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b9fb4fad6cd0336e433b4e1fbef0e6b22fb91a554f56767c63aebd4d88ab7f,PodSandboxId:0148ada5bb3dcfed7e35f6a165a379a57b055fbc2cf0974daa1442032f00f50c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704756519196198754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413e611ab7e26f12dc6ba8cc6ee4b07b32ac22f3b38ace7dc8b04e36ceb914c,PodSandboxId:b0e552bf587c9f4af76888b405d23ae9d97ee5c4ca825af77e2f51df257aa641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704756518414567744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98228bce7d5d2c85811d112163a382076e4374d9e49f8698277b65e40932ceeb,PodSandboxId:0c9698b8bc15549b4641f613fce7e16f49c319e96379f8c3d023c7535bdf768d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704756505041244224,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58cac4b789316b93f88c08bb28ec2b7a744b57eb8ba26594eeaa66325d6219af,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704756502499281332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacc643e910c7972cdede6f829771825cdac6917a0aa330d00366ccce66fced8,PodSandboxId:4d474131f3b28e9abd7b100d576e43e4f7b827740c4bec50ce9a1e4066917a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704756502381716909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d2534
7535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c998d0305fa7ec45ea79099ab8c3903e65b5074d0663d21f248c61009580f88,PodSandboxId:80e79b595b52f2a2b414abe46b412e50251c76b6c01487268c094535aa03434f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704756497215142317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2fa57b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a924c54057249bc44a9876e427de27d3d2131f1bd6850604781ed7ea1ff13141,PodSandboxId:1e521481cf1afe0e075f1bee4193fa4d98e528fc59630b9fac5950d7ba5bf1c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704756496909334859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8ddb85d7fc9b10c4b897fd2891ae557edb2790646d84d5946750ff551caad,PodSandboxId:37ab957e5b5955db093b2d1a535ed5e259554bc06d5a58ef47f9b7ca33799e2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704756496832528784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac1178743dee86434ca6a69f89f7ed0a5944f5dda9eac8a5e21025354ccde67,PodSandboxId:213bda2a1e5e3faf0f7a1613634db2c7b5fffc0cfcd9c2f33c43c8339178aa91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704756496557842743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9187ed9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5d3bdbb5-1c8d-4920-9a27-09b8a88142e0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.727125048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0344874b-7785-4c64-ac5b-c4d96df5a926 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.727206174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0344874b-7785-4c64-ac5b-c4d96df5a926 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.728593051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=899a2d74-8acc-46c3-93ff-1b44f6e6e184 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.729079492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704756727729065182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=899a2d74-8acc-46c3-93ff-1b44f6e6e184 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.729714894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=942d0a96-12e8-4c95-8ee0-b949278831ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.729824082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=942d0a96-12e8-4c95-8ee0-b949278831ad name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:32:07 multinode-266395 crio[716]: time="2024-01-08 23:32:07.730011420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:400a936610e860740363360e8ccf9ad42b553f3b8e7c714e651eae26a06b7d97,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704756533811195471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b9fb4fad6cd0336e433b4e1fbef0e6b22fb91a554f56767c63aebd4d88ab7f,PodSandboxId:0148ada5bb3dcfed7e35f6a165a379a57b055fbc2cf0974daa1442032f00f50c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704756519196198754,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-nl6pn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 72697c77-17fa-4588-9f0f-c41eaad79e47,},Annotations:map[string]string{io.kubernetes.container.hash: 33395ea2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6413e611ab7e26f12dc6ba8cc6ee4b07b32ac22f3b38ace7dc8b04e36ceb914c,PodSandboxId:b0e552bf587c9f4af76888b405d23ae9d97ee5c4ca825af77e2f51df257aa641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704756518414567744,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-r8pvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5300c187-4f1f-4330-ae19-6bf2855763f2,},Annotations:map[string]string{io.kubernetes.container.hash: 58d2816b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98228bce7d5d2c85811d112163a382076e4374d9e49f8698277b65e40932ceeb,PodSandboxId:0c9698b8bc15549b4641f613fce7e16f49c319e96379f8c3d023c7535bdf768d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704756505041244224,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-mnltq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c65752e0-cd30-49cf-9645-5befeecc3d34,},Annotations:map[string]string{io.kubernetes.container.hash: b9923ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58cac4b789316b93f88c08bb28ec2b7a744b57eb8ba26594eeaa66325d6219af,PodSandboxId:5233b0c9781bbb2595bf96bcea561ca76adb5812da739d3e32e54bd3c6e8a233,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704756502499281332,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: f15dcd0d-59b5-4f16-94c7-425f162c60ad,},Annotations:map[string]string{io.kubernetes.container.hash: fef16a1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aacc643e910c7972cdede6f829771825cdac6917a0aa330d00366ccce66fced8,PodSandboxId:4d474131f3b28e9abd7b100d576e43e4f7b827740c4bec50ce9a1e4066917a90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704756502381716909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvmgf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c37677d-6832-4d6b-8f29-c23d2534
7535,},Annotations:map[string]string{io.kubernetes.container.hash: 7259ae62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c998d0305fa7ec45ea79099ab8c3903e65b5074d0663d21f248c61009580f88,PodSandboxId:80e79b595b52f2a2b414abe46b412e50251c76b6c01487268c094535aa03434f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704756497215142317,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3877d55338da5237c1c7dded8cd78f4,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2fa57b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a924c54057249bc44a9876e427de27d3d2131f1bd6850604781ed7ea1ff13141,PodSandboxId:1e521481cf1afe0e075f1bee4193fa4d98e528fc59630b9fac5950d7ba5bf1c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704756496909334859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54274c879f4fed7fb51beb6c8ca6c27b,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64f8ddb85d7fc9b10c4b897fd2891ae557edb2790646d84d5946750ff551caad,PodSandboxId:37ab957e5b5955db093b2d1a535ed5e259554bc06d5a58ef47f9b7ca33799e2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704756496832528784,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23f79a1dbfb4b47131ec4bff995f3d05,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac1178743dee86434ca6a69f89f7ed0a5944f5dda9eac8a5e21025354ccde67,PodSandboxId:213bda2a1e5e3faf0f7a1613634db2c7b5fffc0cfcd9c2f33c43c8339178aa91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704756496557842743,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-266395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 693c20f812d77c22a17dccfbf3ed1fb9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9187ed9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=942d0a96-12e8-4c95-8ee0-b949278831ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	400a936610e86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   5233b0c9781bb       storage-provisioner
	37b9fb4fad6cd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   0148ada5bb3dc       busybox-5bc68d56bd-nl6pn
	6413e611ab7e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   b0e552bf587c9       coredns-5dd5756b68-r8pvw
	98228bce7d5d2       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   0c9698b8bc155       kindnet-mnltq
	58cac4b789316       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   5233b0c9781bb       storage-provisioner
	aacc643e910c7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   4d474131f3b28       kube-proxy-lvmgf
	5c998d0305fa7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   80e79b595b52f       etcd-multinode-266395
	a924c54057249       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   1e521481cf1af       kube-scheduler-multinode-266395
	64f8ddb85d7fc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   37ab957e5b595       kube-controller-manager-multinode-266395
	6ac1178743dee       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   213bda2a1e5e3       kube-apiserver-multinode-266395
	
	
	==> coredns [6413e611ab7e26f12dc6ba8cc6ee4b07b32ac22f3b38ace7dc8b04e36ceb914c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40693 - 580 "HINFO IN 4694103633222490257.4407653100675984352. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032595331s
	
	
	==> describe nodes <==
	Name:               multinode-266395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-266395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_17_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:17:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266395
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:32:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:28:51 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:28:51 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:28:51 +0000   Mon, 08 Jan 2024 23:17:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:28:51 +0000   Mon, 08 Jan 2024 23:28:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-266395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d716d46d1ee14efebe781ccf0f9b5f7a
	  System UUID:                d716d46d-1ee1-4efe-be78-1ccf0f9b5f7a
	  Boot ID:                    a44c1c8f-d037-4e88-a8d8-227f8880f304
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nl6pn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-r8pvw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-266395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-mnltq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-266395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-266395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-lvmgf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-266395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-266395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-266395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-266395 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-266395 event: Registered Node multinode-266395 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-266395 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-266395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-266395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-266395 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-266395 event: Registered Node multinode-266395 in Controller
	
	
	Name:               multinode-266395-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-266395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T23_32_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:30:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-266395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:32:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:30:19 +0000   Mon, 08 Jan 2024 23:30:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:30:19 +0000   Mon, 08 Jan 2024 23:30:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:30:19 +0000   Mon, 08 Jan 2024 23:30:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:30:19 +0000   Mon, 08 Jan 2024 23:30:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    multinode-266395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ceecfa7e5c428fa6b8738e4703b92d
	  System UUID:                65ceecfa-7e5c-428f-a6b8-738e4703b92d
	  Boot ID:                    04dd2ce7-2fa0-4124-926c-344f5d0f9405
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hd6qv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-fcjt6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-v4q5n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-266395-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-266395-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m56s                  kubelet          Node multinode-266395-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m15s (x2 over 3m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 109s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  109s (x2 over 109s)    kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s (x2 over 109s)    kubelet          Node multinode-266395-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s (x2 over 109s)    kubelet          Node multinode-266395-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  109s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                109s                   kubelet          Node multinode-266395-m02 status is now: NodeReady
	  Normal   RegisteredNode           105s                   node-controller  Node multinode-266395-m02 event: Registered Node multinode-266395-m02 in Controller
	
	
	Name:               multinode-266395-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-266395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-266395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T23_32_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-266395-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:32:03 +0000   Mon, 08 Jan 2024 23:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:32:03 +0000   Mon, 08 Jan 2024 23:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:32:03 +0000   Mon, 08 Jan 2024 23:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:32:03 +0000   Mon, 08 Jan 2024 23:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    multinode-266395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 01362d0f25ef448c9bde293cba7d126d
	  System UUID:                01362d0f-25ef-448c-9bde-293cba7d126d
	  Boot ID:                    d59d34ba-9451-4249-9672-21bcfb6442db
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wcrzw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kindnet-brbnm               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-vbq4b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 2s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-266395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-266395-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-266395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-266395-m03 status is now: NodeReady
	  Normal   NodeNotReady             71s                 kubelet     Node multinode-266395-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        42s (x2 over 102s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-266395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-266395-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-266395-m03 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[Jan 8 23:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.352685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.450963] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152842] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.570357] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.532372] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.113045] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.148870] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.114358] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.207535] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Jan 8 23:28] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[ +19.068772] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [5c998d0305fa7ec45ea79099ab8c3903e65b5074d0663d21f248c61009580f88] <==
	{"level":"info","ts":"2024-01-08T23:28:18.925318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3959cc3c468ccbd1","local-member-id":"d6d01a71dfc61a14","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:28:18.92536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:28:18.934406Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T23:28:18.938544Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d6d01a71dfc61a14","initial-advertise-peer-urls":["https://192.168.39.18:2380"],"listen-peer-urls":["https://192.168.39.18:2380"],"advertise-client-urls":["https://192.168.39.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T23:28:18.938823Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T23:28:18.938156Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d6d01a71dfc61a14","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T23:28:18.938223Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T23:28:18.939327Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T23:28:18.939383Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T23:28:18.938364Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-01-08T23:28:18.939466Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-01-08T23:28:19.276707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T23:28:19.276847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T23:28:19.276862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgPreVoteResp from d6d01a71dfc61a14 at term 2"}
	{"level":"info","ts":"2024-01-08T23:28:19.276873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T23:28:19.276879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgVoteResp from d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2024-01-08T23:28:19.276887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T23:28:19.276894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d6d01a71dfc61a14 elected leader d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2024-01-08T23:28:19.278528Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d6d01a71dfc61a14","local-member-attributes":"{Name:multinode-266395 ClientURLs:[https://192.168.39.18:2379]}","request-path":"/0/members/d6d01a71dfc61a14/attributes","cluster-id":"3959cc3c468ccbd1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T23:28:19.27854Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:28:19.27887Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:28:19.279547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T23:28:19.280188Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.18:2379"}
	{"level":"info","ts":"2024-01-08T23:28:19.280671Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T23:28:19.280726Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:32:08 up 4 min,  0 users,  load average: 0.47, 0.40, 0.18
	Linux multinode-266395 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [98228bce7d5d2c85811d112163a382076e4374d9e49f8698277b65e40932ceeb] <==
	I0108 23:31:36.818402       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:31:36.818451       1 main.go:227] handling current node
	I0108 23:31:36.818468       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:31:36.818474       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	I0108 23:31:36.818662       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 23:31:36.818785       1 main.go:250] Node multinode-266395-m03 has CIDR [10.244.3.0/24] 
	I0108 23:31:46.824125       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:31:46.824176       1 main.go:227] handling current node
	I0108 23:31:46.824188       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:31:46.824195       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	I0108 23:31:46.824321       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 23:31:46.824356       1 main.go:250] Node multinode-266395-m03 has CIDR [10.244.3.0/24] 
	I0108 23:31:56.837837       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:31:56.837899       1 main.go:227] handling current node
	I0108 23:31:56.837915       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:31:56.837931       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	I0108 23:31:56.838077       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 23:31:56.838120       1 main.go:250] Node multinode-266395-m03 has CIDR [10.244.3.0/24] 
	I0108 23:32:06.854244       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0108 23:32:06.854409       1 main.go:227] handling current node
	I0108 23:32:06.854486       1 main.go:223] Handling node with IPs: map[192.168.39.214:{}]
	I0108 23:32:06.854496       1 main.go:250] Node multinode-266395-m02 has CIDR [10.244.1.0/24] 
	I0108 23:32:06.854623       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0108 23:32:06.854628       1 main.go:250] Node multinode-266395-m03 has CIDR [10.244.2.0/24] 
	I0108 23:32:06.854890       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.239 Flags: [] Table: 0} 
	
	
	==> kube-apiserver [6ac1178743dee86434ca6a69f89f7ed0a5944f5dda9eac8a5e21025354ccde67] <==
	I0108 23:28:20.759852       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0108 23:28:20.828428       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 23:28:20.829045       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 23:28:20.906407       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 23:28:20.954482       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:28:20.955306       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 23:28:20.955346       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 23:28:20.955387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:28:20.959886       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 23:28:20.959961       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 23:28:20.967320       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 23:28:20.967375       1 aggregator.go:166] initial CRD sync complete...
	I0108 23:28:20.967382       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 23:28:20.967387       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 23:28:20.967391       1 cache.go:39] Caches are synced for autoregister controller
	E0108 23:28:20.981468       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0108 23:28:20.998045       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 23:28:21.761117       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 23:28:23.700951       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 23:28:23.837863       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 23:28:23.852623       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 23:28:23.949381       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:28:23.956497       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 23:28:34.200498       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 23:28:34.301643       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [64f8ddb85d7fc9b10c4b897fd2891ae557edb2790646d84d5946750ff551caad] <==
	I0108 23:30:19.571634       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m03"
	I0108 23:30:19.571968       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-266395-m02\" does not exist"
	I0108 23:30:19.572854       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wz22p" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wz22p"
	I0108 23:30:19.604659       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-266395-m02" podCIDRs=["10.244.1.0/24"]
	I0108 23:30:19.621962       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:30:20.470532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.248µs"
	I0108 23:30:23.995680       1 event.go:307] "Event occurred" object="multinode-266395-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-266395-m02 event: Registered Node multinode-266395-m02 in Controller"
	I0108 23:30:33.754871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="143.852µs"
	I0108 23:30:34.337700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.233µs"
	I0108 23:30:34.347002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.671µs"
	I0108 23:30:57.529459       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:31:59.649820       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hd6qv"
	I0108 23:31:59.664538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.322476ms"
	I0108 23:31:59.687589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.986415ms"
	I0108 23:31:59.687696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.479µs"
	I0108 23:31:59.704179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.724µs"
	I0108 23:32:01.603616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.929138ms"
	I0108 23:32:01.604529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.864µs"
	I0108 23:32:02.659892       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:32:03.301852       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:32:03.304466       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-wcrzw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-wcrzw"
	I0108 23:32:03.305086       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-266395-m03\" does not exist"
	I0108 23:32:03.328143       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-266395-m03" podCIDRs=["10.244.2.0/24"]
	I0108 23:32:03.642097       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-266395-m02"
	I0108 23:32:04.284269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.12µs"
	
	
	==> kube-proxy [aacc643e910c7972cdede6f829771825cdac6917a0aa330d00366ccce66fced8] <==
	I0108 23:28:22.724606       1 server_others.go:69] "Using iptables proxy"
	I0108 23:28:22.740080       1 node.go:141] Successfully retrieved node IP: 192.168.39.18
	I0108 23:28:22.944615       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 23:28:22.944668       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 23:28:22.947338       1 server_others.go:152] "Using iptables Proxier"
	I0108 23:28:22.947372       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 23:28:22.947512       1 server.go:846] "Version info" version="v1.28.4"
	I0108 23:28:22.947520       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:28:22.953426       1 config.go:188] "Starting service config controller"
	I0108 23:28:22.953446       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 23:28:22.953461       1 config.go:97] "Starting endpoint slice config controller"
	I0108 23:28:22.953464       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 23:28:22.955305       1 config.go:315] "Starting node config controller"
	I0108 23:28:22.955312       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 23:28:23.054473       1 shared_informer.go:318] Caches are synced for service config
	I0108 23:28:23.054661       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 23:28:23.056141       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a924c54057249bc44a9876e427de27d3d2131f1bd6850604781ed7ea1ff13141] <==
	I0108 23:28:19.013180       1 serving.go:348] Generated self-signed cert in-memory
	W0108 23:28:20.887239       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 23:28:20.887322       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 23:28:20.887350       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 23:28:20.888023       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 23:28:20.927202       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 23:28:20.927350       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:28:20.934488       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 23:28:20.934535       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:28:20.935634       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 23:28:20.935712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 23:28:21.035181       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 23:27:48 UTC, ends at Mon 2024-01-08 23:32:08 UTC. --
	Jan 08 23:28:25 multinode-266395 kubelet[921]: E0108 23:28:25.591665     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-r8pvw" podUID="5300c187-4f1f-4330-ae19-6bf2855763f2"
	Jan 08 23:28:25 multinode-266395 kubelet[921]: E0108 23:28:25.592471     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nl6pn" podUID="72697c77-17fa-4588-9f0f-c41eaad79e47"
	Jan 08 23:28:25 multinode-266395 kubelet[921]: E0108 23:28:25.643567     921 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 08 23:28:27 multinode-266395 kubelet[921]: E0108 23:28:27.591587     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nl6pn" podUID="72697c77-17fa-4588-9f0f-c41eaad79e47"
	Jan 08 23:28:27 multinode-266395 kubelet[921]: E0108 23:28:27.592703     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-r8pvw" podUID="5300c187-4f1f-4330-ae19-6bf2855763f2"
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.258721     921 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.258981     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5300c187-4f1f-4330-ae19-6bf2855763f2-config-volume podName:5300c187-4f1f-4330-ae19-6bf2855763f2 nodeName:}" failed. No retries permitted until 2024-01-08 23:28:37.258944632 +0000 UTC m=+21.932367465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5300c187-4f1f-4330-ae19-6bf2855763f2-config-volume") pod "coredns-5dd5756b68-r8pvw" (UID: "5300c187-4f1f-4330-ae19-6bf2855763f2") : object "kube-system"/"coredns" not registered
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.359324     921 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.359371     921 projected.go:198] Error preparing data for projected volume kube-api-access-qgcqb for pod default/busybox-5bc68d56bd-nl6pn: object "default"/"kube-root-ca.crt" not registered
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.359478     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72697c77-17fa-4588-9f0f-c41eaad79e47-kube-api-access-qgcqb podName:72697c77-17fa-4588-9f0f-c41eaad79e47 nodeName:}" failed. No retries permitted until 2024-01-08 23:28:37.359452551 +0000 UTC m=+22.032875385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qgcqb" (UniqueName: "kubernetes.io/projected/72697c77-17fa-4588-9f0f-c41eaad79e47-kube-api-access-qgcqb") pod "busybox-5bc68d56bd-nl6pn" (UID: "72697c77-17fa-4588-9f0f-c41eaad79e47") : object "default"/"kube-root-ca.crt" not registered
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.591226     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-r8pvw" podUID="5300c187-4f1f-4330-ae19-6bf2855763f2"
	Jan 08 23:28:29 multinode-266395 kubelet[921]: E0108 23:28:29.591967     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-nl6pn" podUID="72697c77-17fa-4588-9f0f-c41eaad79e47"
	Jan 08 23:28:53 multinode-266395 kubelet[921]: I0108 23:28:53.786697     921 scope.go:117] "RemoveContainer" containerID="58cac4b789316b93f88c08bb28ec2b7a744b57eb8ba26594eeaa66325d6219af"
	Jan 08 23:29:15 multinode-266395 kubelet[921]: E0108 23:29:15.607127     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 23:29:15 multinode-266395 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 23:29:15 multinode-266395 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 23:29:15 multinode-266395 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 23:30:15 multinode-266395 kubelet[921]: E0108 23:30:15.631964     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 23:30:15 multinode-266395 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 23:30:15 multinode-266395 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 23:30:15 multinode-266395 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 23:31:15 multinode-266395 kubelet[921]: E0108 23:31:15.608576     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 23:31:15 multinode-266395 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 23:31:15 multinode-266395 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 23:31:15 multinode-266395 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-266395 -n multinode-266395
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-266395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (691.61s)

x
+
TestMultiNode/serial/StopMultiNode (143.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266395 stop: exit status 82 (2m1.727019423s)

-- stdout --
	* Stopping node "multinode-266395"  ...
	* Stopping node "multinode-266395"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-266395 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status
E0108 23:34:19.627872  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266395 status: exit status 3 (18.632116631s)

-- stdout --
	multinode-266395
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-266395-m02
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	E0108 23:34:31.523733  426190 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0108 23:34:31.523783  426190 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host

** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-266395 status" : exit status 3
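For reference, the failing sequence above (stop, then status) can be replayed by hand with the same minikube invocations the harness uses. The Go sketch below is a minimal stand-in, not the code in multinode_test.go; the binary path out/minikube-linux-amd64 and the profile name multinode-266395 are taken from this run, and the final step runs the `minikube logs --file=logs.txt` collection that the GUEST_STOP_TIMEOUT error box recommends.

// stop_repro.go: minimal sketch (not multinode_test.go) that replays the
// stop -> status -> logs sequence from this failure. The binary path and the
// profile name are assumptions copied from the run above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const minikube = "out/minikube-linux-amd64"

// run executes the local minikube binary, streaming its output and returning
// any non-zero exit status as an error.
func run(args ...string) error {
	cmd := exec.Command(minikube, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	profile := "multinode-266395"

	// Step 1: stop the cluster; in this run the command hangs on the
	// control-plane VM and exits with status 82 (GUEST_STOP_TIMEOUT).
	if err := run("-p", profile, "stop"); err != nil {
		fmt.Println("stop:", err)
	}

	// Step 2: query status; with the VM reported "Running" but SSH unreachable
	// (no route to host on port 22), this exits with status 3.
	if err := run("-p", profile, "status"); err != nil {
		fmt.Println("status:", err)
	}

	// Step 3: collect logs for a bug report, as the error box above suggests.
	if err := run("-p", profile, "logs", "--file=logs.txt"); err != nil {
		fmt.Println("logs:", err)
	}
}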
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-266395 -n multinode-266395
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-266395 -n multinode-266395: exit status 3 (3.191479128s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0108 23:34:34.883737  426300 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0108 23:34:34.883760  426300 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-266395" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.55s)

x
+
TestPreload (226.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-320518 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 23:44:16.726341  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:44:19.627551  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-320518 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m19.070033204s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-320518 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-320518
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-320518: (7.108508405s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-320518 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0108 23:45:49.610573  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:46:13.678619  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-320518 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.038495171s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-320518 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:523: *** TestPreload FAILED at 2024-01-08 23:46:30.968275381 +0000 UTC m=+3289.845225589
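For reference, the failure above reduces to the five commands recorded in the Audit table below: start v1.24.4 with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, start again (which, per the Last Start log, downloads the v1.24.4 crio preload), and list images. The Go sketch is a simplified stand-in for preload_test.go, not the test itself; the binary path, profile name, and flags are copied from this run, and the final check mirrors the assertion at preload_test.go:76 that fails here.

// preload_repro.go: minimal sketch (not preload_test.go) of the sequence this
// test drives; binary path, profile name, and flags are copied from the run.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const minikube = "out/minikube-linux-amd64"

// run executes the local minikube binary, capturing stdout for inspection and
// reporting any non-zero exit status.
func run(args ...string) string {
	var out bytes.Buffer
	cmd := exec.Command(minikube, args...)
	cmd.Stdout = &out
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("minikube", strings.Join(args, " ")+":", err)
	}
	return out.String()
}

func main() {
	p := "test-preload-320518"

	// 1. Start an older Kubernetes with the preload tarball disabled, so every
	//    image ends up in the CRI-O store directly.
	run("start", "-p", p, "--memory=2200", "--wait=true", "--preload=false",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4")

	// 2. Pull an image that no preload ships.
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")

	// 3. Stop, then restart; the "Last Start" log below shows the restart
	//    downloading the v1.24.4 crio preload.
	run("stop", "-p", p)
	run("start", "-p", p, "--memory=2200", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")

	// 4. The assertion that fails in this report: the pulled image should
	//    still appear in the image list after the restart.
	if !strings.Contains(run("-p", p, "image", "list"), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("gcr.io/k8s-minikube/busybox missing after restart")
	}
}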
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-320518 -n test-preload-320518
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-320518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-320518 logs -n 25: (1.167129531s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395 sudo cat                                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m03_multinode-266395.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt                       | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m02:/home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n                                                                 | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | multinode-266395-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-266395 ssh -n multinode-266395-m02 sudo cat                                   | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | /home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-266395 node stop m03                                                          | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	| node    | multinode-266395 node start                                                             | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC | 08 Jan 24 23:20 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-266395                                                                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC |                     |
	| stop    | -p multinode-266395                                                                     | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:20 UTC |                     |
	| start   | -p multinode-266395                                                                     | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:22 UTC | 08 Jan 24 23:32 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-266395                                                                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:32 UTC |                     |
	| node    | multinode-266395 node delete                                                            | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:32 UTC | 08 Jan 24 23:32 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-266395 stop                                                                   | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:32 UTC |                     |
	| start   | -p multinode-266395                                                                     | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:34 UTC | 08 Jan 24 23:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-266395                                                                | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:41 UTC |                     |
	| start   | -p multinode-266395-m02                                                                 | multinode-266395-m02 | jenkins | v1.32.0 | 08 Jan 24 23:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-266395-m03                                                                 | multinode-266395-m03 | jenkins | v1.32.0 | 08 Jan 24 23:41 UTC | 08 Jan 24 23:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-266395                                                                 | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:42 UTC |                     |
	| delete  | -p multinode-266395-m03                                                                 | multinode-266395-m03 | jenkins | v1.32.0 | 08 Jan 24 23:42 UTC | 08 Jan 24 23:42 UTC |
	| delete  | -p multinode-266395                                                                     | multinode-266395     | jenkins | v1.32.0 | 08 Jan 24 23:42 UTC | 08 Jan 24 23:42 UTC |
	| start   | -p test-preload-320518                                                                  | test-preload-320518  | jenkins | v1.32.0 | 08 Jan 24 23:42 UTC | 08 Jan 24 23:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-320518 image pull                                                          | test-preload-320518  | jenkins | v1.32.0 | 08 Jan 24 23:45 UTC | 08 Jan 24 23:45 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-320518                                                                  | test-preload-320518  | jenkins | v1.32.0 | 08 Jan 24 23:45 UTC | 08 Jan 24 23:45 UTC |
	| start   | -p test-preload-320518                                                                  | test-preload-320518  | jenkins | v1.32.0 | 08 Jan 24 23:45 UTC | 08 Jan 24 23:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-320518 image list                                                          | test-preload-320518  | jenkins | v1.32.0 | 08 Jan 24 23:46 UTC | 08 Jan 24 23:46 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 23:45:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 23:45:14.744517  429005 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:45:14.744676  429005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:45:14.744686  429005 out.go:309] Setting ErrFile to fd 2...
	I0108 23:45:14.744690  429005 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:45:14.744926  429005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:45:14.745471  429005 out.go:303] Setting JSON to false
	I0108 23:45:14.746453  429005 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16041,"bootTime":1704741474,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:45:14.746519  429005 start.go:138] virtualization: kvm guest
	I0108 23:45:14.750118  429005 out.go:177] * [test-preload-320518] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:45:14.751738  429005 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:45:14.751735  429005 notify.go:220] Checking for updates...
	I0108 23:45:14.753296  429005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:45:14.754712  429005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:45:14.756237  429005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:45:14.757661  429005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:45:14.758965  429005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:45:14.760616  429005 config.go:182] Loaded profile config "test-preload-320518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0108 23:45:14.761029  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:45:14.761094  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:45:14.775648  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0108 23:45:14.776118  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:45:14.776661  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:45:14.776688  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:45:14.776994  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:45:14.777189  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:14.779050  429005 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:45:14.780619  429005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:45:14.780951  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:45:14.781001  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:45:14.794997  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0108 23:45:14.795385  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:45:14.795853  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:45:14.795881  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:45:14.796200  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:45:14.796388  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:14.831138  429005 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:45:14.832555  429005 start.go:298] selected driver: kvm2
	I0108 23:45:14.832571  429005 start.go:902] validating driver "kvm2" against &{Name:test-preload-320518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-320518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:45:14.832689  429005 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:45:14.833971  429005 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:45:14.834099  429005 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:45:14.849870  429005 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:45:14.850204  429005 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 23:45:14.850262  429005 cni.go:84] Creating CNI manager for ""
	I0108 23:45:14.850272  429005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:45:14.850285  429005 start_flags.go:323] config:
	{Name:test-preload-320518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-320518 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:45:14.850449  429005 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:45:14.852476  429005 out.go:177] * Starting control plane node test-preload-320518 in cluster test-preload-320518
	I0108 23:45:14.853837  429005 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0108 23:45:14.884042  429005 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0108 23:45:14.884088  429005 cache.go:56] Caching tarball of preloaded images
	I0108 23:45:14.884246  429005 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0108 23:45:14.886272  429005 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0108 23:45:14.887583  429005 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:45:14.916058  429005 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0108 23:45:21.077536  429005 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:45:21.077637  429005 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 23:45:21.995449  429005 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0108 23:45:21.995634  429005 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/config.json ...
	I0108 23:45:21.995868  429005 start.go:365] acquiring machines lock for test-preload-320518: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:45:21.995935  429005 start.go:369] acquired machines lock for "test-preload-320518" in 45.871µs
	I0108 23:45:21.995951  429005 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:45:21.995957  429005 fix.go:54] fixHost starting: 
	I0108 23:45:21.996228  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:45:21.996264  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:45:22.010656  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45359
	I0108 23:45:22.011148  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:45:22.011633  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:45:22.011659  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:45:22.012052  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:45:22.012270  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:22.012443  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetState
	I0108 23:45:22.014099  429005 fix.go:102] recreateIfNeeded on test-preload-320518: state=Stopped err=<nil>
	I0108 23:45:22.014123  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	W0108 23:45:22.014275  429005 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:45:22.017381  429005 out.go:177] * Restarting existing kvm2 VM for "test-preload-320518" ...
	I0108 23:45:22.018680  429005 main.go:141] libmachine: (test-preload-320518) Calling .Start
	I0108 23:45:22.018839  429005 main.go:141] libmachine: (test-preload-320518) Ensuring networks are active...
	I0108 23:45:22.019575  429005 main.go:141] libmachine: (test-preload-320518) Ensuring network default is active
	I0108 23:45:22.019912  429005 main.go:141] libmachine: (test-preload-320518) Ensuring network mk-test-preload-320518 is active
	I0108 23:45:22.020230  429005 main.go:141] libmachine: (test-preload-320518) Getting domain xml...
	I0108 23:45:22.021210  429005 main.go:141] libmachine: (test-preload-320518) Creating domain...
	I0108 23:45:23.240859  429005 main.go:141] libmachine: (test-preload-320518) Waiting to get IP...
	I0108 23:45:23.241713  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:23.242156  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:23.242306  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:23.242178  429051 retry.go:31] will retry after 207.470451ms: waiting for machine to come up
	I0108 23:45:23.451838  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:23.452276  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:23.452304  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:23.452232  429051 retry.go:31] will retry after 371.56979ms: waiting for machine to come up
	I0108 23:45:23.826048  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:23.826654  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:23.826699  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:23.826604  429051 retry.go:31] will retry after 418.46694ms: waiting for machine to come up
	I0108 23:45:24.246136  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:24.246629  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:24.246677  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:24.246579  429051 retry.go:31] will retry after 449.713466ms: waiting for machine to come up
	I0108 23:45:24.698293  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:24.698734  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:24.698774  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:24.698667  429051 retry.go:31] will retry after 555.877451ms: waiting for machine to come up
	I0108 23:45:25.256484  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:25.256900  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:25.256934  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:25.256865  429051 retry.go:31] will retry after 620.481229ms: waiting for machine to come up
	I0108 23:45:25.878737  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:25.879229  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:25.879255  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:25.879176  429051 retry.go:31] will retry after 1.155917611s: waiting for machine to come up
	I0108 23:45:27.036978  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:27.037486  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:27.037522  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:27.037405  429051 retry.go:31] will retry after 1.274590041s: waiting for machine to come up
	I0108 23:45:28.314005  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:28.314446  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:28.314473  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:28.314395  429051 retry.go:31] will retry after 1.73305906s: waiting for machine to come up
	I0108 23:45:30.050533  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:30.050980  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:30.051043  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:30.050959  429051 retry.go:31] will retry after 1.897709315s: waiting for machine to come up
	I0108 23:45:31.950877  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:31.951420  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:31.951457  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:31.951325  429051 retry.go:31] will retry after 1.930865426s: waiting for machine to come up
	I0108 23:45:33.885102  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:33.885500  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:33.885531  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:33.885442  429051 retry.go:31] will retry after 2.41171024s: waiting for machine to come up
	I0108 23:45:36.300019  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:36.300359  429005 main.go:141] libmachine: (test-preload-320518) DBG | unable to find current IP address of domain test-preload-320518 in network mk-test-preload-320518
	I0108 23:45:36.300396  429005 main.go:141] libmachine: (test-preload-320518) DBG | I0108 23:45:36.300304  429051 retry.go:31] will retry after 4.194080052s: waiting for machine to come up
	I0108 23:45:40.497983  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.498401  429005 main.go:141] libmachine: (test-preload-320518) Found IP for machine: 192.168.39.60
	I0108 23:45:40.498459  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has current primary IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.498471  429005 main.go:141] libmachine: (test-preload-320518) Reserving static IP address...
	I0108 23:45:40.498839  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "test-preload-320518", mac: "52:54:00:5e:44:cc", ip: "192.168.39.60"} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.498875  429005 main.go:141] libmachine: (test-preload-320518) DBG | skip adding static IP to network mk-test-preload-320518 - found existing host DHCP lease matching {name: "test-preload-320518", mac: "52:54:00:5e:44:cc", ip: "192.168.39.60"}
	I0108 23:45:40.498891  429005 main.go:141] libmachine: (test-preload-320518) Reserved static IP address: 192.168.39.60
	I0108 23:45:40.498911  429005 main.go:141] libmachine: (test-preload-320518) Waiting for SSH to be available...
	I0108 23:45:40.498930  429005 main.go:141] libmachine: (test-preload-320518) DBG | Getting to WaitForSSH function...
	I0108 23:45:40.501057  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.501394  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.501463  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.501527  429005 main.go:141] libmachine: (test-preload-320518) DBG | Using SSH client type: external
	I0108 23:45:40.501557  429005 main.go:141] libmachine: (test-preload-320518) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa (-rw-------)
	I0108 23:45:40.501598  429005 main.go:141] libmachine: (test-preload-320518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.60 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:45:40.501612  429005 main.go:141] libmachine: (test-preload-320518) DBG | About to run SSH command:
	I0108 23:45:40.501622  429005 main.go:141] libmachine: (test-preload-320518) DBG | exit 0
	I0108 23:45:40.587741  429005 main.go:141] libmachine: (test-preload-320518) DBG | SSH cmd err, output: <nil>: 
	I0108 23:45:40.588128  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetConfigRaw
	I0108 23:45:40.588814  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetIP
	I0108 23:45:40.591567  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.592036  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.592085  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.592327  429005 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/config.json ...
	I0108 23:45:40.592576  429005 machine.go:88] provisioning docker machine ...
	I0108 23:45:40.592599  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:40.592823  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetMachineName
	I0108 23:45:40.592998  429005 buildroot.go:166] provisioning hostname "test-preload-320518"
	I0108 23:45:40.593020  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetMachineName
	I0108 23:45:40.593142  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:40.595457  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.595749  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.595785  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.595854  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:40.596020  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:40.596230  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:40.596360  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:40.596521  429005 main.go:141] libmachine: Using SSH client type: native
	I0108 23:45:40.596875  429005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0108 23:45:40.596889  429005 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-320518 && echo "test-preload-320518" | sudo tee /etc/hostname
	I0108 23:45:40.724613  429005 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-320518
	
	I0108 23:45:40.724657  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:40.727435  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.727820  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.727853  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.728019  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:40.728235  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:40.728401  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:40.728517  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:40.728670  429005 main.go:141] libmachine: Using SSH client type: native
	I0108 23:45:40.729136  429005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0108 23:45:40.729168  429005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-320518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-320518/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-320518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:45:40.853339  429005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:45:40.853404  429005 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:45:40.853424  429005 buildroot.go:174] setting up certificates
	I0108 23:45:40.853438  429005 provision.go:83] configureAuth start
	I0108 23:45:40.853451  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetMachineName
	I0108 23:45:40.853746  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetIP
	I0108 23:45:40.856341  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.856688  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.856717  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.856895  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:40.858970  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.859253  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:40.859286  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:40.859416  429005 provision.go:138] copyHostCerts
	I0108 23:45:40.859494  429005 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:45:40.859508  429005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:45:40.859581  429005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:45:40.859682  429005 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:45:40.859694  429005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:45:40.859718  429005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:45:40.859783  429005 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:45:40.859790  429005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:45:40.859808  429005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:45:40.859865  429005 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.test-preload-320518 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube test-preload-320518]
	I0108 23:45:40.999734  429005 provision.go:172] copyRemoteCerts
	I0108 23:45:40.999802  429005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:45:40.999826  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.002577  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.002912  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.002946  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.003137  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.003334  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.003475  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.003592  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:45:41.088384  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:45:41.111029  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0108 23:45:41.132940  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:45:41.154611  429005 provision.go:86] duration metric: configureAuth took 301.157064ms
	I0108 23:45:41.154638  429005 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:45:41.154803  429005 config.go:182] Loaded profile config "test-preload-320518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0108 23:45:41.154903  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.157392  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.157745  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.157776  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.158052  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.158248  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.158393  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.158559  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.158727  429005 main.go:141] libmachine: Using SSH client type: native
	I0108 23:45:41.159033  429005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0108 23:45:41.159048  429005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:45:41.469794  429005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:45:41.469825  429005 machine.go:91] provisioned docker machine in 877.232993ms
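	Note on the command logged at 23:45:41.159048 above: the %!s(MISSING) token is a Go format-verb artifact of the logger, not part of the command; judging by the output that follows it, the SSH command presumably ran as the equivalent of:
	    sudo mkdir -p /etc/sysconfig
	    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio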
	I0108 23:45:41.469859  429005 start.go:300] post-start starting for "test-preload-320518" (driver="kvm2")
	I0108 23:45:41.469873  429005 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:45:41.469895  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:41.470289  429005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:45:41.470328  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.473112  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.473506  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.473540  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.473645  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.473828  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.473991  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.474178  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:45:41.561458  429005 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:45:41.565669  429005 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 23:45:41.565693  429005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:45:41.565764  429005 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:45:41.565835  429005 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:45:41.565916  429005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:45:41.573667  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:45:41.595833  429005 start.go:303] post-start completed in 125.956347ms
	I0108 23:45:41.595855  429005 fix.go:56] fixHost completed within 19.599898234s
	I0108 23:45:41.595875  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.598347  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.598631  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.598669  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.598846  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.599049  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.599230  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.599395  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.599597  429005 main.go:141] libmachine: Using SSH client type: native
	I0108 23:45:41.599915  429005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0108 23:45:41.599927  429005 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 23:45:41.711965  429005 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704757541.660733678
	
	I0108 23:45:41.711994  429005 fix.go:206] guest clock: 1704757541.660733678
	I0108 23:45:41.712001  429005 fix.go:219] Guest: 2024-01-08 23:45:41.660733678 +0000 UTC Remote: 2024-01-08 23:45:41.595858957 +0000 UTC m=+26.901498432 (delta=64.874721ms)
	I0108 23:45:41.712019  429005 fix.go:190] guest clock delta is within tolerance: 64.874721ms
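	The clock probe logged at 23:45:41.599927 is likewise mangled by the logger's format verbs; given the returned value 1704757541.660733678, the command is presumably:
	    # seconds.nanoseconds since the epoch, used to compare guest and remote clocks
	    date +%s.%N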
	I0108 23:45:41.712023  429005 start.go:83] releasing machines lock for "test-preload-320518", held for 19.716078447s
	I0108 23:45:41.712041  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:41.712318  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetIP
	I0108 23:45:41.714911  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.715264  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.715295  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.715415  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:41.715920  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:41.716079  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:45:41.716198  429005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:45:41.716243  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.716299  429005 ssh_runner.go:195] Run: cat /version.json
	I0108 23:45:41.716326  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:45:41.718512  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.718892  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.718943  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.718973  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.719028  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.719222  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:41.719249  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:41.719251  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.719418  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:45:41.719442  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.719667  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:45:41.719674  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:45:41.719818  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:45:41.719953  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:45:41.800996  429005 ssh_runner.go:195] Run: systemctl --version
	I0108 23:45:41.831221  429005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:45:41.984848  429005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 23:45:41.991345  429005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:45:41.991425  429005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:45:42.005230  429005 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:45:42.005250  429005 start.go:475] detecting cgroup driver to use...
	I0108 23:45:42.005340  429005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:45:42.017634  429005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:45:42.029322  429005 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:45:42.029379  429005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:45:42.040960  429005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:45:42.052411  429005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 23:45:42.151178  429005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:45:42.267258  429005 docker.go:219] disabling docker service ...
	I0108 23:45:42.267337  429005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:45:42.281614  429005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:45:42.293955  429005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:45:42.401369  429005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:45:42.512967  429005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:45:42.526004  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:45:42.542895  429005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0108 23:45:42.542957  429005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:45:42.552050  429005 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 23:45:42.552115  429005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:45:42.561065  429005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:45:42.570031  429005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:45:42.578967  429005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 23:45:42.588249  429005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 23:45:42.596453  429005 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 23:45:42.596516  429005 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 23:45:42.608398  429005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 23:45:42.616692  429005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 23:45:42.718954  429005 ssh_runner.go:195] Run: sudo systemctl restart crio
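	For reference, the container-runtime preparation in the Run: lines above (23:45:42.52 through 23:45:42.71) amounts to roughly the following shell sequence inside the guest; values are copied from the log, and the garbled printf verb is assumed to be %s:
	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pause image and cgroup settings for CRI-O
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk
	    # netfilter prerequisites, then restart the runtime
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio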
	I0108 23:45:42.880569  429005 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 23:45:42.880670  429005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 23:45:42.885779  429005 start.go:543] Will wait 60s for crictl version
	I0108 23:45:42.885844  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:42.891071  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 23:45:42.928645  429005 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 23:45:42.928747  429005 ssh_runner.go:195] Run: crio --version
	I0108 23:45:42.979954  429005 ssh_runner.go:195] Run: crio --version
	I0108 23:45:43.027821  429005 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0108 23:45:43.029644  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetIP
	I0108 23:45:43.032279  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:43.032537  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:45:43.032573  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:45:43.032767  429005 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 23:45:43.037008  429005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:45:43.049901  429005 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0108 23:45:43.049979  429005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:45:43.092747  429005 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0108 23:45:43.092815  429005 ssh_runner.go:195] Run: which lz4
	I0108 23:45:43.096922  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 23:45:43.101164  429005 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 23:45:43.101208  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0108 23:45:44.905467  429005 crio.go:444] Took 1.808599 seconds to copy over tarball
	I0108 23:45:44.905557  429005 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 23:45:47.747843  429005 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.842253152s)
	I0108 23:45:47.747875  429005 crio.go:451] Took 2.842373 seconds to extract the tarball
	I0108 23:45:47.747887  429005 ssh_runner.go:146] rm: /preloaded.tar.lz4
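	The preload step above copies /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 into the guest as /preloaded.tar.lz4 and unpacks it; inside the guest that comes down to (the exact cleanup invocation is not shown in the log and is assumed here):
	    # unpack the preloaded container images into /var, preserving security xattrs
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    # remove the tarball once extraction succeeds
	    sudo rm -f /preloaded.tar.lz4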
	I0108 23:45:47.787342  429005 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 23:45:47.834073  429005 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0108 23:45:47.834107  429005 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 23:45:47.834167  429005 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:45:47.834201  429005 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0108 23:45:47.834240  429005 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0108 23:45:47.834253  429005 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0108 23:45:47.834296  429005 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0108 23:45:47.834403  429005 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0108 23:45:47.834463  429005 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0108 23:45:47.834502  429005 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0108 23:45:47.835585  429005 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0108 23:45:47.835649  429005 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0108 23:45:47.835585  429005 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0108 23:45:47.835588  429005 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0108 23:45:47.835587  429005 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:45:47.835586  429005 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0108 23:45:47.835584  429005 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0108 23:45:47.835585  429005 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0108 23:45:48.000605  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0108 23:45:48.015072  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0108 23:45:48.024553  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0108 23:45:48.025504  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:45:48.051716  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0108 23:45:48.056782  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0108 23:45:48.056975  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0108 23:45:48.070050  429005 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0108 23:45:48.070097  429005 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0108 23:45:48.070145  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.070630  429005 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0108 23:45:48.160887  429005 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0108 23:45:48.160946  429005 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0108 23:45:48.160950  429005 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0108 23:45:48.160985  429005 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0108 23:45:48.161050  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.160998  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.251916  429005 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0108 23:45:48.251969  429005 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0108 23:45:48.252004  429005 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0108 23:45:48.252017  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.252043  429005 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0108 23:45:48.252066  429005 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0108 23:45:48.252043  429005 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0108 23:45:48.252126  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.252144  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0108 23:45:48.252127  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.252186  429005 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0108 23:45:48.252214  429005 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0108 23:45:48.252226  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0108 23:45:48.252248  429005 ssh_runner.go:195] Run: which crictl
	I0108 23:45:48.252286  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0108 23:45:48.267222  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0108 23:45:48.267276  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0108 23:45:48.375736  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0108 23:45:48.375805  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0108 23:45:48.375849  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0108 23:45:48.375911  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0108 23:45:48.375932  429005 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0108 23:45:48.375938  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0108 23:45:48.376009  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0108 23:45:48.376032  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0108 23:45:48.376009  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0108 23:45:48.376104  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0108 23:45:48.380290  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0108 23:45:48.380380  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0108 23:45:48.438449  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0108 23:45:48.438563  429005 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0108 23:45:48.438617  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0108 23:45:48.438632  429005 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0108 23:45:48.438647  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0108 23:45:48.438657  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0108 23:45:48.438571  429005 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0108 23:45:48.438451  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0108 23:45:48.438711  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0108 23:45:48.438737  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0108 23:45:48.438751  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0108 23:45:48.448859  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0108 23:45:50.799700  429005 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.361020816s)
	I0108 23:45:50.799747  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0108 23:45:50.799747  429005 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.361060552s)
	I0108 23:45:50.799781  429005 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0108 23:45:50.799786  429005 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0108 23:45:50.799858  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0108 23:45:50.938044  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0108 23:45:50.938110  429005 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0108 23:45:50.938169  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0108 23:45:51.394058  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0108 23:45:51.394113  429005 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0108 23:45:51.394173  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0108 23:45:53.643404  429005 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.249199968s)
	I0108 23:45:53.643450  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0108 23:45:53.643482  429005 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0108 23:45:53.643542  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0108 23:45:54.385981  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0108 23:45:54.386045  429005 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0108 23:45:54.386122  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0108 23:45:55.230030  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0108 23:45:55.230082  429005 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0108 23:45:55.230135  429005 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0108 23:45:55.975070  429005 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0108 23:45:55.975121  429005 cache_images.go:123] Successfully loaded all cached images
	I0108 23:45:55.975126  429005 cache_images.go:92] LoadImages completed in 8.141006351s
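	Each image named in the LoadImages start line is staged under /var/lib/minikube/images and loaded with podman; the per-image Run: lines above are equivalent to a loop like:
	    for img in coredns_v1.8.6 pause_3.7 kube-scheduler_v1.24.4 etcd_3.5.3-0 \
	               kube-controller-manager_v1.24.4 kube-proxy_v1.24.4 kube-apiserver_v1.24.4; do
	      sudo podman load -i "/var/lib/minikube/images/${img}"
	    done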
	I0108 23:45:55.975224  429005 ssh_runner.go:195] Run: crio config
	I0108 23:45:56.031836  429005 cni.go:84] Creating CNI manager for ""
	I0108 23:45:56.031868  429005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:45:56.031904  429005 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 23:45:56.031954  429005 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-320518 NodeName:test-preload-320518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.60"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 23:45:56.032121  429005 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-320518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.60"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 23:45:56.032199  429005 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-320518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-320518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 23:45:56.032251  429005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0108 23:45:56.041058  429005 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 23:45:56.041136  429005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 23:45:56.049151  429005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 23:45:56.065121  429005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 23:45:56.080819  429005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0108 23:45:56.097257  429005 ssh_runner.go:195] Run: grep 192.168.39.60	control-plane.minikube.internal$ /etc/hosts
	I0108 23:45:56.101096  429005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.60	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 23:45:56.113560  429005 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518 for IP: 192.168.39.60
	I0108 23:45:56.113587  429005 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:45:56.113745  429005 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0108 23:45:56.113784  429005 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0108 23:45:56.113853  429005 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.key
	I0108 23:45:56.113908  429005 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/apiserver.key.a014e791
	I0108 23:45:56.113949  429005 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/proxy-client.key
	I0108 23:45:56.114094  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0108 23:45:56.114121  429005 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0108 23:45:56.114128  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 23:45:56.114152  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0108 23:45:56.114201  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0108 23:45:56.114222  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0108 23:45:56.114258  429005 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:45:56.114863  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 23:45:56.138325  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 23:45:56.162175  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 23:45:56.185782  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 23:45:56.209124  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 23:45:56.232343  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 23:45:56.255326  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 23:45:56.277698  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 23:45:56.300644  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0108 23:45:56.322891  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 23:45:56.344717  429005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0108 23:45:56.366898  429005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 23:45:56.383111  429005 ssh_runner.go:195] Run: openssl version
	I0108 23:45:56.388651  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0108 23:45:56.397712  429005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0108 23:45:56.402011  429005 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0108 23:45:56.402069  429005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0108 23:45:56.407367  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0108 23:45:56.416371  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0108 23:45:56.425476  429005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0108 23:45:56.429928  429005 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0108 23:45:56.429977  429005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0108 23:45:56.435234  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 23:45:56.444478  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 23:45:56.453792  429005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:45:56.458203  429005 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:45:56.458255  429005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 23:45:56.463812  429005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 23:45:56.473127  429005 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 23:45:56.477289  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 23:45:56.483170  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 23:45:56.488822  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 23:45:56.494410  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 23:45:56.500108  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 23:45:56.505778  429005 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
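	The six openssl checks above use -checkend 86400 to confirm that none of the listed certificates expires within the next 24 hours; collected into one loop over the same paths:
	    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	               /var/lib/minikube/certs/etcd/server.crt \
	               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	               /var/lib/minikube/certs/etcd/peer.crt \
	               /var/lib/minikube/certs/front-proxy-client.crt; do
	      openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring within 24h: $crt"
	    done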
	I0108 23:45:56.511502  429005 kubeadm.go:404] StartCluster: {Name:test-preload-320518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-320518 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:45:56.511618  429005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 23:45:56.511672  429005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:45:56.548551  429005 cri.go:89] found id: ""
	I0108 23:45:56.548651  429005 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 23:45:56.557774  429005 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 23:45:56.557804  429005 kubeadm.go:636] restartCluster start
	I0108 23:45:56.557887  429005 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 23:45:56.566130  429005 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:56.566609  429005 kubeconfig.go:135] verify returned: extract IP: "test-preload-320518" does not appear in /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:45:56.566721  429005 kubeconfig.go:146] "test-preload-320518" context is missing from /home/jenkins/minikube-integration/17830-399915/kubeconfig - will repair!
	I0108 23:45:56.567029  429005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:45:56.567655  429005 kapi.go:59] client config for test-preload-320518: &rest.Config{Host:"https://192.168.39.60:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:45:56.568482  429005 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 23:45:56.576465  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:56.576517  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:56.586754  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:57.076793  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:57.076893  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:57.088090  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:57.576675  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:57.576788  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:57.588012  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:58.076995  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:58.077094  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:58.088597  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:58.577242  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:58.577342  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:58.588596  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:59.077238  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:59.077370  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:59.088273  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:45:59.576854  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:45:59.576938  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:45:59.589492  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:00.077164  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:00.077257  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:00.088572  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:00.577240  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:00.577334  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:00.588382  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:01.076909  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:01.077014  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:01.088468  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:01.577021  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:01.577130  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:01.588658  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:02.076774  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:02.076868  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:02.087608  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:02.577284  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:02.577377  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:02.588278  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:03.076804  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:03.076965  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:03.088025  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:03.576564  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:03.576662  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:03.588602  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:04.077221  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:04.077334  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:04.088437  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:04.576950  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:04.577055  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:04.587967  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:05.077050  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:05.077133  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:05.089150  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:05.576733  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:05.576860  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:05.589103  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:06.076631  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:06.076733  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:06.088403  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:06.577273  429005 api_server.go:166] Checking apiserver status ...
	I0108 23:46:06.577363  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 23:46:06.589330  429005 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 23:46:06.589362  429005 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
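	[editor note] The block above is a deadline-bounded retry loop: roughly every 500ms it runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, and when the context deadline expires with no match it falls back to reconfiguring the cluster. A hedged sketch of that poll-until-deadline pattern (command, pattern and interval mirror the log; waitForProcess is a hypothetical helper, not minikube's actual function):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until it finds a match or ctx expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // e.g. context deadline exceeded -> reconfigure path
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println("apiserver not up:", err)
		}
	}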
	I0108 23:46:06.589410  429005 kubeadm.go:1135] stopping kube-system containers ...
	I0108 23:46:06.589425  429005 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 23:46:06.589474  429005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 23:46:06.625607  429005 cri.go:89] found id: ""
	I0108 23:46:06.625677  429005 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 23:46:06.641864  429005 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 23:46:06.651139  429005 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 23:46:06.651203  429005 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 23:46:06.660254  429005 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 23:46:06.660280  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:46:06.762257  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:46:07.414220  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:46:07.752987  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:46:07.837322  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
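	[editor note] With none of the /etc/kubernetes/*.conf files present, the restart path re-runs the individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A rough sketch of driving those same phases from Go, as the ssh_runner lines above do over SSH (binary path, version and config path are the ones shown in the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			// Equivalent of: sudo env PATH=... kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}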
	I0108 23:46:07.939056  429005 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:46:07.939157  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:08.439374  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:08.940033  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:09.439581  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:09.939801  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:10.440307  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:10.462840  429005 api_server.go:72] duration metric: took 2.523784301s to wait for apiserver process to appear ...
	I0108 23:46:10.462874  429005 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:46:10.462913  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:10.463483  429005 api_server.go:269] stopped: https://192.168.39.60:8443/healthz: Get "https://192.168.39.60:8443/healthz": dial tcp 192.168.39.60:8443: connect: connection refused
	I0108 23:46:10.963341  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:14.771224  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 23:46:14.771256  429005 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 23:46:14.771273  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:14.785867  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 23:46:14.785894  429005 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 23:46:14.963087  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:14.969617  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 23:46:14.969651  429005 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 23:46:15.463164  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:15.469712  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 23:46:15.469752  429005 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 23:46:15.963274  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:15.973570  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0108 23:46:15.973613  429005 api_server.go:103] status: https://192.168.39.60:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0108 23:46:16.463081  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:16.469311  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0108 23:46:16.476942  429005 api_server.go:141] control plane version: v1.24.4
	I0108 23:46:16.476979  429005 api_server.go:131] duration metric: took 6.01409295s to wait for apiserver health ...
	I0108 23:46:16.476994  429005 cni.go:84] Creating CNI manager for ""
	I0108 23:46:16.477003  429005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 23:46:16.479118  429005 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 23:46:16.480654  429005 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 23:46:16.495223  429005 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
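	[editor note] With the kvm2 driver and crio, minikube selects the built-in bridge CNI and copies a 457-byte conflist into /etc/cni/net.d. The actual payload is not printed in the log; the sketch below writes a typical bridge+portmap conflist of the kind minikube ships, with illustrative field values that are not taken from this run:

	package main

	import "os"

	// Illustrative bridge CNI config; the real 1-k8s.conflist contents are not shown in the log.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}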
	I0108 23:46:16.527755  429005 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:46:16.539263  429005 system_pods.go:59] 7 kube-system pods found
	I0108 23:46:16.539299  429005 system_pods.go:61] "coredns-6d4b75cb6d-6d2mc" [960f62a3-8c71-409b-a88a-ce556560a9a9] Running
	I0108 23:46:16.539311  429005 system_pods.go:61] "etcd-test-preload-320518" [dd3d217e-ce22-4733-976d-1785301606af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 23:46:16.539321  429005 system_pods.go:61] "kube-apiserver-test-preload-320518" [6c225e0e-9f15-43b3-9aeb-0cf35a314b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 23:46:16.539332  429005 system_pods.go:61] "kube-controller-manager-test-preload-320518" [088d3e19-830b-491b-aeaa-0124b39cc311] Running
	I0108 23:46:16.539345  429005 system_pods.go:61] "kube-proxy-854jw" [9828e7d7-f559-4d94-8f17-cc970bda8dfd] Running
	I0108 23:46:16.539354  429005 system_pods.go:61] "kube-scheduler-test-preload-320518" [d9dca99d-7c5b-4d41-8cfc-4b28ffd19a1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 23:46:16.539376  429005 system_pods.go:61] "storage-provisioner" [b3fab4a7-6048-4f1b-bc09-0f520cb5425d] Running
	I0108 23:46:16.539390  429005 system_pods.go:74] duration metric: took 11.607598ms to wait for pod list to return data ...
	I0108 23:46:16.539403  429005 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:46:16.546664  429005 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:46:16.546692  429005 node_conditions.go:123] node cpu capacity is 2
	I0108 23:46:16.546702  429005 node_conditions.go:105] duration metric: took 7.291075ms to run NodePressure ...
	I0108 23:46:16.546720  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 23:46:16.850981  429005 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 23:46:16.857112  429005 kubeadm.go:787] kubelet initialised
	I0108 23:46:16.857134  429005 kubeadm.go:788] duration metric: took 6.118158ms waiting for restarted kubelet to initialise ...
	I0108 23:46:16.857142  429005 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:46:16.862436  429005 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:16.870017  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.870045  429005 pod_ready.go:81] duration metric: took 7.584052ms waiting for pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:16.870054  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.870060  429005 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:16.876451  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "etcd-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.876474  429005 pod_ready.go:81] duration metric: took 6.405244ms waiting for pod "etcd-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:16.876482  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "etcd-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.876487  429005 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:16.886931  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "kube-apiserver-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.886955  429005 pod_ready.go:81] duration metric: took 10.459778ms waiting for pod "kube-apiserver-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:16.886964  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "kube-apiserver-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.886970  429005 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:16.933943  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.933975  429005 pod_ready.go:81] duration metric: took 46.99598ms waiting for pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:16.933985  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:16.933991  429005 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-854jw" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:17.331334  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "kube-proxy-854jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:17.331385  429005 pod_ready.go:81] duration metric: took 397.38317ms waiting for pod "kube-proxy-854jw" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:17.331399  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "kube-proxy-854jw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:17.331406  429005 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:17.732056  429005 pod_ready.go:97] node "test-preload-320518" hosting pod "kube-scheduler-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:17.732100  429005 pod_ready.go:81] duration metric: took 400.68252ms waiting for pod "kube-scheduler-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	E0108 23:46:17.732110  429005 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-320518" hosting pod "kube-scheduler-test-preload-320518" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:17.732117  429005 pod_ready.go:38] duration metric: took 874.96683ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
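	[editor note] Each per-pod wait above short-circuits because the node still reports Ready=False; once the node turns Ready later in this log, the same pods report Ready=True. A hedged client-go sketch of the underlying per-pod check, reading a pod's Ready condition (kubeconfig path, namespace and pod name are the ones in the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17830-399915/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-6d4b75cb6d-6d2mc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Println("Ready:", ready)
	}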
	I0108 23:46:17.732134  429005 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 23:46:17.743240  429005 ops.go:34] apiserver oom_adj: -16
	I0108 23:46:17.743270  429005 kubeadm.go:640] restartCluster took 21.185456201s
	I0108 23:46:17.743280  429005 kubeadm.go:406] StartCluster complete in 21.231788628s
	I0108 23:46:17.743305  429005 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:46:17.743399  429005 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:46:17.744110  429005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 23:46:17.744346  429005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 23:46:17.744500  429005 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 23:46:17.744601  429005 addons.go:69] Setting storage-provisioner=true in profile "test-preload-320518"
	I0108 23:46:17.744635  429005 config.go:182] Loaded profile config "test-preload-320518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0108 23:46:17.744644  429005 addons.go:237] Setting addon storage-provisioner=true in "test-preload-320518"
	W0108 23:46:17.744696  429005 addons.go:246] addon storage-provisioner should already be in state true
	I0108 23:46:17.744756  429005 host.go:66] Checking if "test-preload-320518" exists ...
	I0108 23:46:17.744617  429005 addons.go:69] Setting default-storageclass=true in profile "test-preload-320518"
	I0108 23:46:17.744823  429005 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-320518"
	I0108 23:46:17.744981  429005 kapi.go:59] client config for test-preload-320518: &rest.Config{Host:"https://192.168.39.60:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:46:17.745195  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:46:17.745238  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:46:17.745262  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:46:17.745310  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:46:17.748382  429005 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-320518" context rescaled to 1 replicas
	I0108 23:46:17.748422  429005 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.60 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 23:46:17.750346  429005 out.go:177] * Verifying Kubernetes components...
	I0108 23:46:17.751658  429005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:46:17.761474  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0108 23:46:17.761474  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I0108 23:46:17.761942  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:46:17.762015  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:46:17.762463  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:46:17.762482  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:46:17.762480  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:46:17.762500  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:46:17.762802  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:46:17.762822  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:46:17.763002  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetState
	I0108 23:46:17.763435  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:46:17.763487  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:46:17.765678  429005 kapi.go:59] client config for test-preload-320518: &rest.Config{Host:"https://192.168.39.60:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/profiles/test-preload-320518/client.key", CAFile:"/home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 23:46:17.766036  429005 addons.go:237] Setting addon default-storageclass=true in "test-preload-320518"
	W0108 23:46:17.766058  429005 addons.go:246] addon default-storageclass should already be in state true
	I0108 23:46:17.766093  429005 host.go:66] Checking if "test-preload-320518" exists ...
	I0108 23:46:17.766485  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:46:17.766523  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:46:17.779090  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0108 23:46:17.779528  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:46:17.780103  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:46:17.780131  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:46:17.780526  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:46:17.780755  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetState
	I0108 23:46:17.782605  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:46:17.784743  429005 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 23:46:17.783793  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I0108 23:46:17.786157  429005 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:46:17.786179  429005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 23:46:17.786200  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:46:17.786448  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:46:17.786929  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:46:17.786963  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:46:17.787284  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:46:17.787865  429005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:46:17.787902  429005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:46:17.790668  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:46:17.791075  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:46:17.791108  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:46:17.791300  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:46:17.791524  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:46:17.791672  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:46:17.791834  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:46:17.805860  429005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41595
	I0108 23:46:17.806258  429005 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:46:17.806742  429005 main.go:141] libmachine: Using API Version  1
	I0108 23:46:17.806773  429005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:46:17.807128  429005 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:46:17.807335  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetState
	I0108 23:46:17.808870  429005 main.go:141] libmachine: (test-preload-320518) Calling .DriverName
	I0108 23:46:17.809136  429005 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 23:46:17.809155  429005 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 23:46:17.809174  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHHostname
	I0108 23:46:17.812078  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:46:17.812551  429005 main.go:141] libmachine: (test-preload-320518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:44:cc", ip: ""} in network mk-test-preload-320518: {Iface:virbr1 ExpiryTime:2024-01-09 00:45:34 +0000 UTC Type:0 Mac:52:54:00:5e:44:cc Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:test-preload-320518 Clientid:01:52:54:00:5e:44:cc}
	I0108 23:46:17.812584  429005 main.go:141] libmachine: (test-preload-320518) DBG | domain test-preload-320518 has defined IP address 192.168.39.60 and MAC address 52:54:00:5e:44:cc in network mk-test-preload-320518
	I0108 23:46:17.812660  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHPort
	I0108 23:46:17.812829  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHKeyPath
	I0108 23:46:17.812979  429005 main.go:141] libmachine: (test-preload-320518) Calling .GetSSHUsername
	I0108 23:46:17.813141  429005 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/test-preload-320518/id_rsa Username:docker}
	I0108 23:46:17.907072  429005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 23:46:17.933739  429005 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 23:46:17.933760  429005 node_ready.go:35] waiting up to 6m0s for node "test-preload-320518" to be "Ready" ...
	I0108 23:46:17.956463  429005 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 23:46:18.802133  429005 main.go:141] libmachine: Making call to close driver server
	I0108 23:46:18.802167  429005 main.go:141] libmachine: (test-preload-320518) Calling .Close
	I0108 23:46:18.802168  429005 main.go:141] libmachine: Making call to close driver server
	I0108 23:46:18.802189  429005 main.go:141] libmachine: (test-preload-320518) Calling .Close
	I0108 23:46:18.802493  429005 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:46:18.802507  429005 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:46:18.802525  429005 main.go:141] libmachine: Making call to close driver server
	I0108 23:46:18.802536  429005 main.go:141] libmachine: (test-preload-320518) Calling .Close
	I0108 23:46:18.802622  429005 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:46:18.802670  429005 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:46:18.802686  429005 main.go:141] libmachine: Making call to close driver server
	I0108 23:46:18.802695  429005 main.go:141] libmachine: (test-preload-320518) Calling .Close
	I0108 23:46:18.802641  429005 main.go:141] libmachine: (test-preload-320518) DBG | Closing plugin on server side
	I0108 23:46:18.802780  429005 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:46:18.802782  429005 main.go:141] libmachine: (test-preload-320518) DBG | Closing plugin on server side
	I0108 23:46:18.802797  429005 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:46:18.802960  429005 main.go:141] libmachine: (test-preload-320518) DBG | Closing plugin on server side
	I0108 23:46:18.802982  429005 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:46:18.802995  429005 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:46:18.812098  429005 main.go:141] libmachine: Making call to close driver server
	I0108 23:46:18.812114  429005 main.go:141] libmachine: (test-preload-320518) Calling .Close
	I0108 23:46:18.812346  429005 main.go:141] libmachine: Successfully made call to close driver server
	I0108 23:46:18.812366  429005 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 23:46:18.814505  429005 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 23:46:18.815785  429005 addons.go:508] enable addons completed in 1.071293793s: enabled=[storage-provisioner default-storageclass]
	I0108 23:46:19.942433  429005 node_ready.go:58] node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:22.437518  429005 node_ready.go:58] node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:24.939351  429005 node_ready.go:58] node "test-preload-320518" has status "Ready":"False"
	I0108 23:46:25.438202  429005 node_ready.go:49] node "test-preload-320518" has status "Ready":"True"
	I0108 23:46:25.438232  429005 node_ready.go:38] duration metric: took 7.504446174s waiting for node "test-preload-320518" to be "Ready" ...
	I0108 23:46:25.438245  429005 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:46:25.443689  429005 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:25.451049  429005 pod_ready.go:92] pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:25.451076  429005 pod_ready.go:81] duration metric: took 7.359383ms waiting for pod "coredns-6d4b75cb6d-6d2mc" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:25.451087  429005 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:26.958057  429005 pod_ready.go:92] pod "etcd-test-preload-320518" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:26.958084  429005 pod_ready.go:81] duration metric: took 1.50698922s waiting for pod "etcd-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:26.958093  429005 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:28.464505  429005 pod_ready.go:92] pod "kube-apiserver-test-preload-320518" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:28.464533  429005 pod_ready.go:81] duration metric: took 1.506433721s waiting for pod "kube-apiserver-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:28.464545  429005 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.971398  429005 pod_ready.go:92] pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:29.971428  429005 pod_ready.go:81] duration metric: took 1.506876645s waiting for pod "kube-controller-manager-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.971437  429005 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-854jw" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.975914  429005 pod_ready.go:92] pod "kube-proxy-854jw" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:29.975937  429005 pod_ready.go:81] duration metric: took 4.493881ms waiting for pod "kube-proxy-854jw" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.975946  429005 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.979907  429005 pod_ready.go:92] pod "kube-scheduler-test-preload-320518" in "kube-system" namespace has status "Ready":"True"
	I0108 23:46:29.979926  429005 pod_ready.go:81] duration metric: took 3.973306ms waiting for pod "kube-scheduler-test-preload-320518" in "kube-system" namespace to be "Ready" ...
	I0108 23:46:29.979933  429005 pod_ready.go:38] duration metric: took 4.541677202s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 23:46:29.979948  429005 api_server.go:52] waiting for apiserver process to appear ...
	I0108 23:46:29.980033  429005 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:46:29.996871  429005 api_server.go:72] duration metric: took 12.24841456s to wait for apiserver process to appear ...
	I0108 23:46:29.996891  429005 api_server.go:88] waiting for apiserver healthz status ...
	I0108 23:46:29.996909  429005 api_server.go:253] Checking apiserver healthz at https://192.168.39.60:8443/healthz ...
	I0108 23:46:30.002192  429005 api_server.go:279] https://192.168.39.60:8443/healthz returned 200:
	ok
	I0108 23:46:30.003243  429005 api_server.go:141] control plane version: v1.24.4
	I0108 23:46:30.003270  429005 api_server.go:131] duration metric: took 6.371989ms to wait for apiserver health ...
	I0108 23:46:30.003280  429005 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 23:46:30.041241  429005 system_pods.go:59] 7 kube-system pods found
	I0108 23:46:30.041271  429005 system_pods.go:61] "coredns-6d4b75cb6d-6d2mc" [960f62a3-8c71-409b-a88a-ce556560a9a9] Running
	I0108 23:46:30.041277  429005 system_pods.go:61] "etcd-test-preload-320518" [dd3d217e-ce22-4733-976d-1785301606af] Running
	I0108 23:46:30.041288  429005 system_pods.go:61] "kube-apiserver-test-preload-320518" [6c225e0e-9f15-43b3-9aeb-0cf35a314b93] Running
	I0108 23:46:30.041294  429005 system_pods.go:61] "kube-controller-manager-test-preload-320518" [088d3e19-830b-491b-aeaa-0124b39cc311] Running
	I0108 23:46:30.041299  429005 system_pods.go:61] "kube-proxy-854jw" [9828e7d7-f559-4d94-8f17-cc970bda8dfd] Running
	I0108 23:46:30.041308  429005 system_pods.go:61] "kube-scheduler-test-preload-320518" [d9dca99d-7c5b-4d41-8cfc-4b28ffd19a1b] Running
	I0108 23:46:30.041313  429005 system_pods.go:61] "storage-provisioner" [b3fab4a7-6048-4f1b-bc09-0f520cb5425d] Running
	I0108 23:46:30.041321  429005 system_pods.go:74] duration metric: took 38.034391ms to wait for pod list to return data ...
	I0108 23:46:30.041334  429005 default_sa.go:34] waiting for default service account to be created ...
	I0108 23:46:30.238882  429005 default_sa.go:45] found service account: "default"
	I0108 23:46:30.238916  429005 default_sa.go:55] duration metric: took 197.574063ms for default service account to be created ...
	I0108 23:46:30.238927  429005 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 23:46:30.441635  429005 system_pods.go:86] 7 kube-system pods found
	I0108 23:46:30.441670  429005 system_pods.go:89] "coredns-6d4b75cb6d-6d2mc" [960f62a3-8c71-409b-a88a-ce556560a9a9] Running
	I0108 23:46:30.441683  429005 system_pods.go:89] "etcd-test-preload-320518" [dd3d217e-ce22-4733-976d-1785301606af] Running
	I0108 23:46:30.441690  429005 system_pods.go:89] "kube-apiserver-test-preload-320518" [6c225e0e-9f15-43b3-9aeb-0cf35a314b93] Running
	I0108 23:46:30.441696  429005 system_pods.go:89] "kube-controller-manager-test-preload-320518" [088d3e19-830b-491b-aeaa-0124b39cc311] Running
	I0108 23:46:30.441701  429005 system_pods.go:89] "kube-proxy-854jw" [9828e7d7-f559-4d94-8f17-cc970bda8dfd] Running
	I0108 23:46:30.441706  429005 system_pods.go:89] "kube-scheduler-test-preload-320518" [d9dca99d-7c5b-4d41-8cfc-4b28ffd19a1b] Running
	I0108 23:46:30.441712  429005 system_pods.go:89] "storage-provisioner" [b3fab4a7-6048-4f1b-bc09-0f520cb5425d] Running
	I0108 23:46:30.441720  429005 system_pods.go:126] duration metric: took 202.787721ms to wait for k8s-apps to be running ...
	I0108 23:46:30.441730  429005 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 23:46:30.441790  429005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:46:30.456334  429005 system_svc.go:56] duration metric: took 14.595959ms WaitForService to wait for kubelet.
	I0108 23:46:30.456363  429005 kubeadm.go:581] duration metric: took 12.707912201s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 23:46:30.456385  429005 node_conditions.go:102] verifying NodePressure condition ...
	I0108 23:46:30.638949  429005 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 23:46:30.638980  429005 node_conditions.go:123] node cpu capacity is 2
	I0108 23:46:30.638993  429005 node_conditions.go:105] duration metric: took 182.601743ms to run NodePressure ...
	I0108 23:46:30.639005  429005 start.go:228] waiting for startup goroutines ...
	I0108 23:46:30.639011  429005 start.go:233] waiting for cluster config update ...
	I0108 23:46:30.639021  429005 start.go:242] writing updated cluster config ...
	I0108 23:46:30.639299  429005 ssh_runner.go:195] Run: rm -f paused
	I0108 23:46:30.689779  429005 start.go:600] kubectl: 1.29.0, cluster: 1.24.4 (minor skew: 5)
	I0108 23:46:30.691830  429005 out.go:177] 
	W0108 23:46:30.693325  429005 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0108 23:46:30.694660  429005 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0108 23:46:30.696080  429005 out.go:177] * Done! kubectl is now configured to use "test-preload-320518" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 23:45:33 UTC, ends at Mon 2024-01-08 23:46:31 UTC. --
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.657865194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704757591657841704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=86238cfd-3f01-48ad-ba55-a3de134468b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.663532265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de645103-0202-4a21-ae7f-b298e7ce26a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.663817203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de645103-0202-4a21-ae7f-b298e7ce26a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.664347029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:140aff0cdd9277ea770c6323ecd90ecb61121dbffb711201c1944b7f43bdce3f,PodSandboxId:280c9d244093b46427eecf44e0ad6c452997edf05b346d36d238e4a23a700c7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1704757583217964479,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6d2mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f62a3-8c71-409b-a88a-ce556560a9a9,},Annotations:map[string]string{io.kubernetes.container.hash: dc9f20cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b141a91105c79789432c478b1c912c43f3eb17518888ad36b319065644cbf733,PodSandboxId:1905ce1552a4d0160a0a47b415228918dfb116a379ce18b4678eb811e3bb6b7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704757576552252055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3fab4a7-6048-4f1b-bc09-0f520cb5425d,},Annotations:map[string]string{io.kubernetes.container.hash: b74cbc1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d786028dfd6c8ef6f8ce1bff68001910ec00191a4c6ccf5663f75dcd197143,PodSandboxId:95b66c07d8bcce6220e6dc8ca599ceb6ca1dd5d7e5aa12020551ada8542bb1de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1704757576342425028,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-854jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
828e7d7-f559-4d94-8f17-cc970bda8dfd,},Annotations:map[string]string{io.kubernetes.container.hash: c339c244,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9e28475ec7ed33d21301c90841d9db908b4d7017e22bf62f320c3a76b5ad8e,PodSandboxId:03fc019c7dc4f8c5d0987e502e79e3716f542f116c62095ea8d80c2659113f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1704757569251656262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e5bce84fc10e4f9f4a1718cd3986f1,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c136a8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfc4003631b69aed277449f5ec361116653bc86b513fbf4bd6fb0699b02560,PodSandboxId:aec8bd9b041715b592aa1f5a4fcd62a392eaf6c6d23455f7a8f1d09591a4d19c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1704757569295686073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767a3732ea7d2091c3e031a0d862e
14b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dffc050b4e7697f1c2aa559e53dc262d8656b8fbb262777e1c22bf224b74697,PodSandboxId:e216a37ef8f57e7e5177e8ba86a97b2e370945ada6e3ed9c66b45379f507adf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1704757569193638739,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caa289f0cccdc9307723772742b59ce7,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042225eff036e48382e2a024a12262a378e4d4a97369234640c2e2aaa42027d8,PodSandboxId:c6c497f124bcec9cc8b6e4e225349978d820ed912e267d2f61040f80ba773b48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1704757569017581804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037d041ad45088728f69e92136842c8b,},Annotations:map[string
]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de645103-0202-4a21-ae7f-b298e7ce26a4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.708339532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=67f041ec-9c41-4959-8b62-8c441738798c name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.708427294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=67f041ec-9c41-4959-8b62-8c441738798c name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.710531033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=38ba719d-119d-4418-bf22-725b1e534802 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.710997387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704757591710943388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=38ba719d-119d-4418-bf22-725b1e534802 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.711570838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e615a6d4-d9c7-4a8b-b751-1ca50740ae81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.711642806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e615a6d4-d9c7-4a8b-b751-1ca50740ae81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.711804949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:140aff0cdd9277ea770c6323ecd90ecb61121dbffb711201c1944b7f43bdce3f,PodSandboxId:280c9d244093b46427eecf44e0ad6c452997edf05b346d36d238e4a23a700c7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1704757583217964479,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6d2mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f62a3-8c71-409b-a88a-ce556560a9a9,},Annotations:map[string]string{io.kubernetes.container.hash: dc9f20cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b141a91105c79789432c478b1c912c43f3eb17518888ad36b319065644cbf733,PodSandboxId:1905ce1552a4d0160a0a47b415228918dfb116a379ce18b4678eb811e3bb6b7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704757576552252055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3fab4a7-6048-4f1b-bc09-0f520cb5425d,},Annotations:map[string]string{io.kubernetes.container.hash: b74cbc1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d786028dfd6c8ef6f8ce1bff68001910ec00191a4c6ccf5663f75dcd197143,PodSandboxId:95b66c07d8bcce6220e6dc8ca599ceb6ca1dd5d7e5aa12020551ada8542bb1de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1704757576342425028,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-854jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
828e7d7-f559-4d94-8f17-cc970bda8dfd,},Annotations:map[string]string{io.kubernetes.container.hash: c339c244,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9e28475ec7ed33d21301c90841d9db908b4d7017e22bf62f320c3a76b5ad8e,PodSandboxId:03fc019c7dc4f8c5d0987e502e79e3716f542f116c62095ea8d80c2659113f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1704757569251656262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e5bce84fc10e4f9f4a1718cd3986f1,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c136a8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfc4003631b69aed277449f5ec361116653bc86b513fbf4bd6fb0699b02560,PodSandboxId:aec8bd9b041715b592aa1f5a4fcd62a392eaf6c6d23455f7a8f1d09591a4d19c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1704757569295686073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767a3732ea7d2091c3e031a0d862e
14b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dffc050b4e7697f1c2aa559e53dc262d8656b8fbb262777e1c22bf224b74697,PodSandboxId:e216a37ef8f57e7e5177e8ba86a97b2e370945ada6e3ed9c66b45379f507adf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1704757569193638739,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caa289f0cccdc9307723772742b59ce7,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042225eff036e48382e2a024a12262a378e4d4a97369234640c2e2aaa42027d8,PodSandboxId:c6c497f124bcec9cc8b6e4e225349978d820ed912e267d2f61040f80ba773b48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1704757569017581804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037d041ad45088728f69e92136842c8b,},Annotations:map[string
]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e615a6d4-d9c7-4a8b-b751-1ca50740ae81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.756053658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=08e059c5-7fe5-4d2b-a41d-9c7b8a531893 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.756149846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=08e059c5-7fe5-4d2b-a41d-9c7b8a531893 name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.759155841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=46152315-2ef9-4669-bf2c-41a00723f4cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.759686831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704757591759672882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=46152315-2ef9-4669-bf2c-41a00723f4cf name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.760692319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7e86575e-c845-4e92-9a25-564f22042d3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.760743904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7e86575e-c845-4e92-9a25-564f22042d3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.760890359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:140aff0cdd9277ea770c6323ecd90ecb61121dbffb711201c1944b7f43bdce3f,PodSandboxId:280c9d244093b46427eecf44e0ad6c452997edf05b346d36d238e4a23a700c7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1704757583217964479,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6d2mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f62a3-8c71-409b-a88a-ce556560a9a9,},Annotations:map[string]string{io.kubernetes.container.hash: dc9f20cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b141a91105c79789432c478b1c912c43f3eb17518888ad36b319065644cbf733,PodSandboxId:1905ce1552a4d0160a0a47b415228918dfb116a379ce18b4678eb811e3bb6b7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704757576552252055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3fab4a7-6048-4f1b-bc09-0f520cb5425d,},Annotations:map[string]string{io.kubernetes.container.hash: b74cbc1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d786028dfd6c8ef6f8ce1bff68001910ec00191a4c6ccf5663f75dcd197143,PodSandboxId:95b66c07d8bcce6220e6dc8ca599ceb6ca1dd5d7e5aa12020551ada8542bb1de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1704757576342425028,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-854jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
828e7d7-f559-4d94-8f17-cc970bda8dfd,},Annotations:map[string]string{io.kubernetes.container.hash: c339c244,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9e28475ec7ed33d21301c90841d9db908b4d7017e22bf62f320c3a76b5ad8e,PodSandboxId:03fc019c7dc4f8c5d0987e502e79e3716f542f116c62095ea8d80c2659113f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1704757569251656262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e5bce84fc10e4f9f4a1718cd3986f1,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c136a8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfc4003631b69aed277449f5ec361116653bc86b513fbf4bd6fb0699b02560,PodSandboxId:aec8bd9b041715b592aa1f5a4fcd62a392eaf6c6d23455f7a8f1d09591a4d19c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1704757569295686073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767a3732ea7d2091c3e031a0d862e
14b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dffc050b4e7697f1c2aa559e53dc262d8656b8fbb262777e1c22bf224b74697,PodSandboxId:e216a37ef8f57e7e5177e8ba86a97b2e370945ada6e3ed9c66b45379f507adf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1704757569193638739,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caa289f0cccdc9307723772742b59ce7,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042225eff036e48382e2a024a12262a378e4d4a97369234640c2e2aaa42027d8,PodSandboxId:c6c497f124bcec9cc8b6e4e225349978d820ed912e267d2f61040f80ba773b48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1704757569017581804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037d041ad45088728f69e92136842c8b,},Annotations:map[string
]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7e86575e-c845-4e92-9a25-564f22042d3e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.796001447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=68ff5a10-b158-4bf1-8577-c65c95e137df name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.796070481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=68ff5a10-b158-4bf1-8577-c65c95e137df name=/runtime.v1.RuntimeService/Version
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.797621951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bbf6df76-f53c-4adb-801c-800895a0ee9d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.798044096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704757591798031398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=bbf6df76-f53c-4adb-801c-800895a0ee9d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.798668705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=59356068-c09b-48f2-baa3-ddd934d25535 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.798738679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=59356068-c09b-48f2-baa3-ddd934d25535 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 23:46:31 test-preload-320518 crio[700]: time="2024-01-08 23:46:31.798887775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:140aff0cdd9277ea770c6323ecd90ecb61121dbffb711201c1944b7f43bdce3f,PodSandboxId:280c9d244093b46427eecf44e0ad6c452997edf05b346d36d238e4a23a700c7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1704757583217964479,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6d2mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960f62a3-8c71-409b-a88a-ce556560a9a9,},Annotations:map[string]string{io.kubernetes.container.hash: dc9f20cb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b141a91105c79789432c478b1c912c43f3eb17518888ad36b319065644cbf733,PodSandboxId:1905ce1552a4d0160a0a47b415228918dfb116a379ce18b4678eb811e3bb6b7a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704757576552252055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3fab4a7-6048-4f1b-bc09-0f520cb5425d,},Annotations:map[string]string{io.kubernetes.container.hash: b74cbc1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d786028dfd6c8ef6f8ce1bff68001910ec00191a4c6ccf5663f75dcd197143,PodSandboxId:95b66c07d8bcce6220e6dc8ca599ceb6ca1dd5d7e5aa12020551ada8542bb1de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1704757576342425028,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-854jw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
828e7d7-f559-4d94-8f17-cc970bda8dfd,},Annotations:map[string]string{io.kubernetes.container.hash: c339c244,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9e28475ec7ed33d21301c90841d9db908b4d7017e22bf62f320c3a76b5ad8e,PodSandboxId:03fc019c7dc4f8c5d0987e502e79e3716f542f116c62095ea8d80c2659113f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1704757569251656262,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4e5bce84fc10e4f9f4a1718cd3986f1,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c136a8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3cfc4003631b69aed277449f5ec361116653bc86b513fbf4bd6fb0699b02560,PodSandboxId:aec8bd9b041715b592aa1f5a4fcd62a392eaf6c6d23455f7a8f1d09591a4d19c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1704757569295686073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 767a3732ea7d2091c3e031a0d862e
14b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dffc050b4e7697f1c2aa559e53dc262d8656b8fbb262777e1c22bf224b74697,PodSandboxId:e216a37ef8f57e7e5177e8ba86a97b2e370945ada6e3ed9c66b45379f507adf3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1704757569193638739,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caa289f0cccdc9307723772742b59ce7,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 9e45a7bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042225eff036e48382e2a024a12262a378e4d4a97369234640c2e2aaa42027d8,PodSandboxId:c6c497f124bcec9cc8b6e4e225349978d820ed912e267d2f61040f80ba773b48,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1704757569017581804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-320518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 037d041ad45088728f69e92136842c8b,},Annotations:map[string
]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=59356068-c09b-48f2-baa3-ddd934d25535 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	140aff0cdd927       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   280c9d244093b       coredns-6d4b75cb6d-6d2mc
	b141a91105c79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   1905ce1552a4d       storage-provisioner
	72d786028dfd6       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   95b66c07d8bcc       kube-proxy-854jw
	e3cfc4003631b       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   aec8bd9b04171       kube-controller-manager-test-preload-320518
	3d9e28475ec7e       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   03fc019c7dc4f       etcd-test-preload-320518
	5dffc050b4e76       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   e216a37ef8f57       kube-apiserver-test-preload-320518
	042225eff036e       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   c6c497f124bce       kube-scheduler-test-preload-320518
	
	
	==> coredns [140aff0cdd9277ea770c6323ecd90ecb61121dbffb711201c1944b7f43bdce3f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:57162 - 46418 "HINFO IN 990638332406869968.794329860336378210. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.029181881s
	
	
	==> describe nodes <==
	Name:               test-preload-320518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-320518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=test-preload-320518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T23_44_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 23:44:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-320518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 23:46:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 23:46:25 +0000   Mon, 08 Jan 2024 23:44:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 23:46:25 +0000   Mon, 08 Jan 2024 23:44:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 23:46:25 +0000   Mon, 08 Jan 2024 23:44:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 23:46:25 +0000   Mon, 08 Jan 2024 23:46:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    test-preload-320518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 df145bc7f0b14cc583b68d75a4f9d321
	  System UUID:                df145bc7-f0b1-4cc5-83b6-8d75a4f9d321
	  Boot ID:                    938b18d8-c7b1-41a7-9969-20835c5e03bc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6d2mc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m9s
	  kube-system                 etcd-test-preload-320518                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m24s
	  kube-system                 kube-apiserver-test-preload-320518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-test-preload-320518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-854jw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-test-preload-320518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 2m5s               kube-proxy       
	  Normal  Starting                 2m22s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s              kubelet          Node test-preload-320518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s              kubelet          Node test-preload-320518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s              kubelet          Node test-preload-320518 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m12s              kubelet          Node test-preload-320518 status is now: NodeReady
	  Normal  RegisteredNode           2m10s              node-controller  Node test-preload-320518 event: Registered Node test-preload-320518 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)  kubelet          Node test-preload-320518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)  kubelet          Node test-preload-320518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)  kubelet          Node test-preload-320518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-320518 event: Registered Node test-preload-320518 in Controller
	
	
	==> dmesg <==
	[Jan 8 23:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067532] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.356625] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.536013] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152011] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.584180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.869207] systemd-fstab-generator[626]: Ignoring "noauto" for root device
	[  +0.109190] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.146757] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.104583] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.208562] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[Jan 8 23:46] systemd-fstab-generator[1081]: Ignoring "noauto" for root device
	[  +9.321720] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.655225] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [3d9e28475ec7ed33d21301c90841d9db908b4d7017e22bf62f320c3a76b5ad8e] <==
	{"level":"info","ts":"2024-01-08T23:46:11.226Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1a622f206f99396a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-01-08T23:46:11.227Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-08T23:46:11.227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a switched to configuration voters=(1901133809061542250)"}
	{"level":"info","ts":"2024-01-08T23:46:11.228Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","added-peer-id":"1a622f206f99396a","added-peer-peer-urls":["https://192.168.39.60:2380"]}
	{"level":"info","ts":"2024-01-08T23:46:11.228Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"94dd135126e1e7b0","local-member-id":"1a622f206f99396a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:46:11.228Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T23:46:11.230Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T23:46:11.230Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1a622f206f99396a","initial-advertise-peer-urls":["https://192.168.39.60:2380"],"listen-peer-urls":["https://192.168.39.60:2380"],"advertise-client-urls":["https://192.168.39.60:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.60:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T23:46:11.230Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T23:46:11.237Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-01-08T23:46:11.237Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.60:2380"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgPreVoteResp from 1a622f206f99396a at term 2"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a received MsgVoteResp from 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a622f206f99396a became leader at term 3"}
	{"level":"info","ts":"2024-01-08T23:46:12.280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a622f206f99396a elected leader 1a622f206f99396a at term 3"}
	{"level":"info","ts":"2024-01-08T23:46:12.282Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1a622f206f99396a","local-member-attributes":"{Name:test-preload-320518 ClientURLs:[https://192.168.39.60:2379]}","request-path":"/0/members/1a622f206f99396a/attributes","cluster-id":"94dd135126e1e7b0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T23:46:12.282Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:46:12.283Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T23:46:12.284Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T23:46:12.284Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.60:2379"}
	{"level":"info","ts":"2024-01-08T23:46:12.284Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T23:46:12.284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:46:32 up 1 min,  0 users,  load average: 0.61, 0.22, 0.08
	Linux test-preload-320518 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5dffc050b4e7697f1c2aa559e53dc262d8656b8fbb262777e1c22bf224b74697] <==
	I0108 23:46:14.704573       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 23:46:14.704592       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 23:46:14.704621       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 23:46:14.704971       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 23:46:14.755913       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0108 23:46:14.755983       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0108 23:46:14.814908       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0108 23:46:14.834382       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0108 23:46:14.856633       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0108 23:46:14.889760       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 23:46:14.894803       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0108 23:46:14.899534       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0108 23:46:14.899600       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 23:46:14.901825       1 cache.go:39] Caches are synced for autoregister controller
	I0108 23:46:14.901996       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0108 23:46:15.377224       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 23:46:15.707964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 23:46:16.710545       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0108 23:46:16.721760       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0108 23:46:16.792643       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0108 23:46:16.818656       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 23:46:16.827876       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 23:46:16.994069       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0108 23:46:27.742189       1 controller.go:611] quota admission added evaluator for: endpoints
	I0108 23:46:27.839971       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e3cfc4003631b69aed277449f5ec361116653bc86b513fbf4bd6fb0699b02560] <==
	I0108 23:46:27.655829       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0108 23:46:27.659160       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0108 23:46:27.661675       1 shared_informer.go:262] Caches are synced for TTL
	I0108 23:46:27.664003       1 shared_informer.go:262] Caches are synced for taint
	I0108 23:46:27.664094       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0108 23:46:27.664159       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0108 23:46:27.664338       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-320518. Assuming now as a timestamp.
	I0108 23:46:27.664567       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0108 23:46:27.664614       1 event.go:294] "Event occurred" object="test-preload-320518" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-320518 event: Registered Node test-preload-320518 in Controller"
	I0108 23:46:27.665171       1 shared_informer.go:262] Caches are synced for GC
	I0108 23:46:27.668718       1 shared_informer.go:262] Caches are synced for ephemeral
	I0108 23:46:27.669947       1 shared_informer.go:262] Caches are synced for daemon sets
	I0108 23:46:27.672417       1 shared_informer.go:262] Caches are synced for job
	I0108 23:46:27.678837       1 shared_informer.go:262] Caches are synced for HPA
	I0108 23:46:27.683167       1 shared_informer.go:262] Caches are synced for endpoint
	I0108 23:46:27.683711       1 shared_informer.go:262] Caches are synced for deployment
	I0108 23:46:27.686046       1 shared_informer.go:262] Caches are synced for persistent volume
	I0108 23:46:27.701528       1 shared_informer.go:262] Caches are synced for stateful set
	I0108 23:46:27.791220       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0108 23:46:27.795162       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0108 23:46:27.835347       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 23:46:27.860055       1 shared_informer.go:262] Caches are synced for resource quota
	I0108 23:46:28.312891       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 23:46:28.343582       1 shared_informer.go:262] Caches are synced for garbage collector
	I0108 23:46:28.343674       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [72d786028dfd6c8ef6f8ce1bff68001910ec00191a4c6ccf5663f75dcd197143] <==
	I0108 23:46:16.940653       1 node.go:163] Successfully retrieved node IP: 192.168.39.60
	I0108 23:46:16.940919       1 server_others.go:138] "Detected node IP" address="192.168.39.60"
	I0108 23:46:16.941003       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0108 23:46:16.981762       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0108 23:46:16.981828       1 server_others.go:206] "Using iptables Proxier"
	I0108 23:46:16.982133       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0108 23:46:16.982399       1 server.go:661] "Version info" version="v1.24.4"
	I0108 23:46:16.982593       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:46:16.984187       1 config.go:317] "Starting service config controller"
	I0108 23:46:16.984748       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0108 23:46:16.984859       1 config.go:226] "Starting endpoint slice config controller"
	I0108 23:46:16.985028       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0108 23:46:16.989759       1 config.go:444] "Starting node config controller"
	I0108 23:46:16.989796       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0108 23:46:17.084973       1 shared_informer.go:262] Caches are synced for service config
	I0108 23:46:17.085123       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0108 23:46:17.090887       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [042225eff036e48382e2a024a12262a378e4d4a97369234640c2e2aaa42027d8] <==
	I0108 23:46:11.259749       1 serving.go:348] Generated self-signed cert in-memory
	W0108 23:46:14.772908       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 23:46:14.773566       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 23:46:14.773587       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 23:46:14.773594       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 23:46:14.817834       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0108 23:46:14.817880       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 23:46:14.828960       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0108 23:46:14.829043       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 23:46:14.828980       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 23:46:14.833358       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 23:46:14.934598       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 23:45:33 UTC, ends at Mon 2024-01-08 23:46:32 UTC. --
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.850336    1087 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-320518"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.854516    1087 setters.go:532] "Node became not ready" node="test-preload-320518" condition={Type:Ready Status:False LastHeartbeatTime:2024-01-08 23:46:14.854400498 +0000 UTC m=+7.135995213 LastTransitionTime:2024-01-08 23:46:14.854400498 +0000 UTC m=+7.135995213 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.860414    1087 apiserver.go:52] "Watching apiserver"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.863838    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.864012    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.864063    1087 topology_manager.go:200] "Topology Admit Handler"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: E0108 23:46:14.864767    1087 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-6d2mc" podUID=960f62a3-8c71-409b-a88a-ce556560a9a9
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.931993    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume\") pod \"coredns-6d4b75cb6d-6d2mc\" (UID: \"960f62a3-8c71-409b-a88a-ce556560a9a9\") " pod="kube-system/coredns-6d4b75cb6d-6d2mc"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932112    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcxk\" (UniqueName: \"kubernetes.io/projected/960f62a3-8c71-409b-a88a-ce556560a9a9-kube-api-access-mwcxk\") pod \"coredns-6d4b75cb6d-6d2mc\" (UID: \"960f62a3-8c71-409b-a88a-ce556560a9a9\") " pod="kube-system/coredns-6d4b75cb6d-6d2mc"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932144    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9828e7d7-f559-4d94-8f17-cc970bda8dfd-xtables-lock\") pod \"kube-proxy-854jw\" (UID: \"9828e7d7-f559-4d94-8f17-cc970bda8dfd\") " pod="kube-system/kube-proxy-854jw"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932162    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b3fab4a7-6048-4f1b-bc09-0f520cb5425d-tmp\") pod \"storage-provisioner\" (UID: \"b3fab4a7-6048-4f1b-bc09-0f520cb5425d\") " pod="kube-system/storage-provisioner"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932184    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmqd\" (UniqueName: \"kubernetes.io/projected/9828e7d7-f559-4d94-8f17-cc970bda8dfd-kube-api-access-nwmqd\") pod \"kube-proxy-854jw\" (UID: \"9828e7d7-f559-4d94-8f17-cc970bda8dfd\") " pod="kube-system/kube-proxy-854jw"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932203    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9828e7d7-f559-4d94-8f17-cc970bda8dfd-kube-proxy\") pod \"kube-proxy-854jw\" (UID: \"9828e7d7-f559-4d94-8f17-cc970bda8dfd\") " pod="kube-system/kube-proxy-854jw"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932220    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4cb\" (UniqueName: \"kubernetes.io/projected/b3fab4a7-6048-4f1b-bc09-0f520cb5425d-kube-api-access-cq4cb\") pod \"storage-provisioner\" (UID: \"b3fab4a7-6048-4f1b-bc09-0f520cb5425d\") " pod="kube-system/storage-provisioner"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932242    1087 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9828e7d7-f559-4d94-8f17-cc970bda8dfd-lib-modules\") pod \"kube-proxy-854jw\" (UID: \"9828e7d7-f559-4d94-8f17-cc970bda8dfd\") " pod="kube-system/kube-proxy-854jw"
	Jan 08 23:46:14 test-preload-320518 kubelet[1087]: I0108 23:46:14.932259    1087 reconciler.go:159] "Reconciler: start to sync state"
	Jan 08 23:46:15 test-preload-320518 kubelet[1087]: E0108 23:46:15.036398    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 23:46:15 test-preload-320518 kubelet[1087]: E0108 23:46:15.036568    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume podName:960f62a3-8c71-409b-a88a-ce556560a9a9 nodeName:}" failed. No retries permitted until 2024-01-08 23:46:15.536534121 +0000 UTC m=+7.818128854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume") pod "coredns-6d4b75cb6d-6d2mc" (UID: "960f62a3-8c71-409b-a88a-ce556560a9a9") : object "kube-system"/"coredns" not registered
	Jan 08 23:46:15 test-preload-320518 kubelet[1087]: E0108 23:46:15.541393    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 23:46:15 test-preload-320518 kubelet[1087]: E0108 23:46:15.541537    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume podName:960f62a3-8c71-409b-a88a-ce556560a9a9 nodeName:}" failed. No retries permitted until 2024-01-08 23:46:16.541519645 +0000 UTC m=+8.823114364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume") pod "coredns-6d4b75cb6d-6d2mc" (UID: "960f62a3-8c71-409b-a88a-ce556560a9a9") : object "kube-system"/"coredns" not registered
	Jan 08 23:46:16 test-preload-320518 kubelet[1087]: E0108 23:46:16.548290    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 23:46:16 test-preload-320518 kubelet[1087]: E0108 23:46:16.548376    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume podName:960f62a3-8c71-409b-a88a-ce556560a9a9 nodeName:}" failed. No retries permitted until 2024-01-08 23:46:18.548359949 +0000 UTC m=+10.829954664 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume") pod "coredns-6d4b75cb6d-6d2mc" (UID: "960f62a3-8c71-409b-a88a-ce556560a9a9") : object "kube-system"/"coredns" not registered
	Jan 08 23:46:17 test-preload-320518 kubelet[1087]: E0108 23:46:17.001169    1087 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-6d2mc" podUID=960f62a3-8c71-409b-a88a-ce556560a9a9
	Jan 08 23:46:18 test-preload-320518 kubelet[1087]: E0108 23:46:18.567589    1087 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 23:46:18 test-preload-320518 kubelet[1087]: E0108 23:46:18.567687    1087 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume podName:960f62a3-8c71-409b-a88a-ce556560a9a9 nodeName:}" failed. No retries permitted until 2024-01-08 23:46:22.567669644 +0000 UTC m=+14.849264361 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/960f62a3-8c71-409b-a88a-ce556560a9a9-config-volume") pod "coredns-6d4b75cb6d-6d2mc" (UID: "960f62a3-8c71-409b-a88a-ce556560a9a9") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [b141a91105c79789432c478b1c912c43f3eb17518888ad36b319065644cbf733] <==
	I0108 23:46:16.853686       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-320518 -n test-preload-320518
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-320518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-320518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-320518
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-320518: (1.117150304s)
--- FAIL: TestPreload (226.42s)
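As context for the repeated kubelet error captured above ("container runtime network not ready ... No CNI configuration file in /etc/cni/net.d/"), the sketch below shows a minimal bridge CNI conflist that would satisfy that check. This is an illustrative aside only, not part of the captured test output and not the mechanism minikube itself uses here; the file name, bridge name, and pod subnet are assumptions chosen for the example.

	# Hypothetical illustration: write a minimal bridge CNI config so the kubelet's
	# /etc/cni/net.d/ check passes (host-local IPAM on an assumed 10.244.0.0/16 subnet).
	sudo tee /etc/cni/net.d/1-bridge.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF

Once a valid conflist is present, the kubelet's NetworkReady condition can flip to true and pods such as coredns-6d4b75cb6d-6d2mc stop being skipped with "network is not ready".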

                                                
                                    
x
+
TestRunningBinaryUpgrade (142.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.799019399.exe start -p running-upgrade-967805 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0108 23:49:19.627951  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.799019399.exe start -p running-upgrade-967805 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m16.142655226s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-967805 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-967805 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.146815959s)

                                                
                                                
-- stdout --
	* [running-upgrade-967805] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-967805 in cluster running-upgrade-967805
	* Updating the running kvm2 "running-upgrade-967805" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:51:02.641907  434244 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:51:02.642194  434244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:51:02.642205  434244 out.go:309] Setting ErrFile to fd 2...
	I0108 23:51:02.642222  434244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:51:02.642415  434244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:51:02.643058  434244 out.go:303] Setting JSON to false
	I0108 23:51:02.644254  434244 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16389,"bootTime":1704741474,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:51:02.644338  434244 start.go:138] virtualization: kvm guest
	I0108 23:51:02.646840  434244 out.go:177] * [running-upgrade-967805] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:51:02.648575  434244 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:51:02.648589  434244 notify.go:220] Checking for updates...
	I0108 23:51:02.652266  434244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:51:02.653886  434244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:51:02.655424  434244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:51:02.656789  434244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:51:02.658100  434244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:51:02.659889  434244 config.go:182] Loaded profile config "running-upgrade-967805": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 23:51:02.659905  434244 start_flags.go:694] config upgrade: Driver=kvm2
	I0108 23:51:02.659917  434244 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:51:02.659999  434244 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/running-upgrade-967805/config.json ...
	I0108 23:51:02.660720  434244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:51:02.660791  434244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:51:02.676130  434244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0108 23:51:02.676611  434244 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:51:02.677198  434244 main.go:141] libmachine: Using API Version  1
	I0108 23:51:02.677225  434244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:51:02.677582  434244 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:51:02.677779  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:02.679872  434244 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:51:02.681264  434244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:51:02.681583  434244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:51:02.681625  434244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:51:02.701786  434244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
	I0108 23:51:02.702312  434244 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:51:02.702929  434244 main.go:141] libmachine: Using API Version  1
	I0108 23:51:02.702982  434244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:51:02.703385  434244 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:51:02.703637  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:02.739324  434244 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:51:02.740703  434244 start.go:298] selected driver: kvm2
	I0108 23:51:02.740718  434244 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-967805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.246 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:51:02.740801  434244 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:51:02.741533  434244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.741620  434244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:51:02.757988  434244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:51:02.758399  434244 cni.go:84] Creating CNI manager for ""
	I0108 23:51:02.758418  434244 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 23:51:02.758432  434244 start_flags.go:323] config:
	{Name:running-upgrade-967805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.246 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:51:02.758613  434244 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.760520  434244 out.go:177] * Starting control plane node running-upgrade-967805 in cluster running-upgrade-967805
	I0108 23:51:02.761778  434244 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 23:51:02.790331  434244 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 23:51:02.790480  434244 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/running-upgrade-967805/config.json ...
	I0108 23:51:02.790616  434244 cache.go:107] acquiring lock: {Name:mke1abc68d57b011e235f2a693188c932692ee34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790627  434244 cache.go:107] acquiring lock: {Name:mk8836388825799782229dd3205a885146fd215a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790650  434244 cache.go:107] acquiring lock: {Name:mkef4b86bda257efecd7db620b6345c0a31887de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790714  434244 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:51:02.790701  434244 cache.go:107] acquiring lock: {Name:mk4458dbe72fbfe64b20ae9545e888e732f8294a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790732  434244 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.106µs
	I0108 23:51:02.790718  434244 cache.go:107] acquiring lock: {Name:mk231c264cad9b303f6ec8b22af8bdd252b4fdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790764  434244 start.go:365] acquiring machines lock for running-upgrade-967805: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:51:02.790747  434244 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:51:02.790766  434244 cache.go:107] acquiring lock: {Name:mke10ad9f35f7f95d57560cb76caa952d55edb07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790795  434244 cache.go:107] acquiring lock: {Name:mkfa26d6386c5ed2f5fcb97e3eab352e7353f242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790799  434244 cache.go:107] acquiring lock: {Name:mk72c03f967f717309a26a5e526633388069ab03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:51:02.790850  434244 start.go:369] acquired machines lock for "running-upgrade-967805" in 68.39µs
	I0108 23:51:02.790872  434244 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:51:02.790880  434244 fix.go:54] fixHost starting: minikube
	I0108 23:51:02.790878  434244 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 23:51:02.790933  434244 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 23:51:02.790966  434244 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:51:02.790880  434244 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 23:51:02.791155  434244 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0108 23:51:02.791179  434244 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 23:51:02.791182  434244 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 23:51:02.791307  434244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:51:02.791646  434244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:51:02.792142  434244 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 23:51:02.792172  434244 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 23:51:02.792146  434244 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 23:51:02.792192  434244 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 23:51:02.792391  434244 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 23:51:02.792546  434244 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 23:51:02.792673  434244 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0108 23:51:02.809112  434244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0108 23:51:02.809546  434244 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:51:02.809987  434244 main.go:141] libmachine: Using API Version  1
	I0108 23:51:02.810012  434244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:51:02.810405  434244 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:51:02.810624  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:02.810839  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetState
	I0108 23:51:02.812497  434244 fix.go:102] recreateIfNeeded on running-upgrade-967805: state=Running err=<nil>
	W0108 23:51:02.812534  434244 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:51:02.814337  434244 out.go:177] * Updating the running kvm2 "running-upgrade-967805" VM ...
	I0108 23:51:02.815708  434244 machine.go:88] provisioning docker machine ...
	I0108 23:51:02.815735  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:02.815944  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetMachineName
	I0108 23:51:02.816110  434244 buildroot.go:166] provisioning hostname "running-upgrade-967805"
	I0108 23:51:02.816129  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetMachineName
	I0108 23:51:02.816295  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:02.818977  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:02.819480  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:02.819511  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:02.819661  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:02.819816  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:02.819951  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:02.820101  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:02.820246  434244 main.go:141] libmachine: Using SSH client type: native
	I0108 23:51:02.820603  434244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0108 23:51:02.820618  434244 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-967805 && echo "running-upgrade-967805" | sudo tee /etc/hostname
	I0108 23:51:02.960583  434244 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-967805
	
	I0108 23:51:02.960620  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:02.963824  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:02.964262  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:02.964295  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:02.964469  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:02.964678  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:02.964897  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:02.965092  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:02.965321  434244 main.go:141] libmachine: Using SSH client type: native
	I0108 23:51:02.965786  434244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0108 23:51:02.965821  434244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-967805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-967805/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-967805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:51:02.972558  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 23:51:03.011976  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0108 23:51:03.037262  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 23:51:03.051394  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 23:51:03.051455  434244 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 260.82127ms
	I0108 23:51:03.051472  434244 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 23:51:03.051634  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0108 23:51:03.088993  434244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:51:03.089067  434244 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:51:03.089117  434244 buildroot.go:174] setting up certificates
	I0108 23:51:03.089139  434244 provision.go:83] configureAuth start
	I0108 23:51:03.089160  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetMachineName
	I0108 23:51:03.089502  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetIP
	I0108 23:51:03.089715  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0108 23:51:03.092066  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0108 23:51:03.092802  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.093261  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:03.093292  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.093495  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:03.096324  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.096787  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:03.096827  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.097048  434244 provision.go:138] copyHostCerts
	I0108 23:51:03.097120  434244 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:51:03.097135  434244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:51:03.097191  434244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:51:03.097314  434244 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:51:03.097326  434244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:51:03.097354  434244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:51:03.097462  434244 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:51:03.097474  434244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:51:03.097503  434244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:51:03.097590  434244 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-967805 san=[192.168.50.246 192.168.50.246 localhost 127.0.0.1 minikube running-upgrade-967805]
	I0108 23:51:03.174392  434244 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0108 23:51:03.421044  434244 provision.go:172] copyRemoteCerts
	I0108 23:51:03.421132  434244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:51:03.421164  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:03.424840  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.425371  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:03.425413  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.425702  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:03.425924  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:03.426124  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:03.426352  434244 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/running-upgrade-967805/id_rsa Username:docker}
	I0108 23:51:03.518641  434244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:51:03.552068  434244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:51:03.571290  434244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 23:51:03.602142  434244 provision.go:86] duration metric: configureAuth took 512.981471ms
	I0108 23:51:03.602223  434244 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:51:03.602582  434244 config.go:182] Loaded profile config "running-upgrade-967805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 23:51:03.602780  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:03.605974  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.606543  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:03.606565  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:03.606588  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:03.606665  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:03.606745  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:03.606838  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:03.606995  434244 main.go:141] libmachine: Using SSH client type: native
	I0108 23:51:03.607396  434244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0108 23:51:03.607448  434244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:51:03.632602  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 23:51:03.632644  434244 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 841.879154ms
	I0108 23:51:03.632663  434244 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 23:51:03.951082  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 23:51:03.951174  434244 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.160563613s
	I0108 23:51:03.951204  434244 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 23:51:04.164961  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 23:51:04.164990  434244 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.374197661s
	I0108 23:51:04.165018  434244 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 23:51:04.274520  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 23:51:04.274558  434244 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.483867196s
	I0108 23:51:04.274571  434244 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 23:51:04.274810  434244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:51:04.274837  434244 machine.go:91] provisioned docker machine in 1.459109297s
	I0108 23:51:04.274848  434244 start.go:300] post-start starting for "running-upgrade-967805" (driver="kvm2")
	I0108 23:51:04.274880  434244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:51:04.274910  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:04.275623  434244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:51:04.275666  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:04.278462  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.278830  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:04.278852  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.279115  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:04.283557  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:04.283763  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:04.283898  434244 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/running-upgrade-967805/id_rsa Username:docker}
	I0108 23:51:04.371442  434244 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:51:04.377228  434244 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 23:51:04.377250  434244 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:51:04.377325  434244 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:51:04.377421  434244 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:51:04.377527  434244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:51:04.385327  434244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:51:04.407563  434244 start.go:303] post-start completed in 132.681949ms
	I0108 23:51:04.407594  434244 fix.go:56] fixHost completed within 1.616713796s
	I0108 23:51:04.407621  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:04.410860  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.411272  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:04.411308  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.411467  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:04.411658  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:04.411862  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:04.412034  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:04.412218  434244 main.go:141] libmachine: Using SSH client type: native
	I0108 23:51:04.412733  434244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I0108 23:51:04.412753  434244 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 23:51:04.536988  434244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704757864.532218962
	
	I0108 23:51:04.537015  434244 fix.go:206] guest clock: 1704757864.532218962
	I0108 23:51:04.537026  434244 fix.go:219] Guest: 2024-01-08 23:51:04.532218962 +0000 UTC Remote: 2024-01-08 23:51:04.407599217 +0000 UTC m=+1.823498114 (delta=124.619745ms)
	I0108 23:51:04.537051  434244 fix.go:190] guest clock delta is within tolerance: 124.619745ms
	I0108 23:51:04.537065  434244 start.go:83] releasing machines lock for "running-upgrade-967805", held for 1.746197072s
	I0108 23:51:04.537093  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:04.537377  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetIP
	I0108 23:51:04.540321  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.540847  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:04.540885  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.541062  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:04.541686  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:04.542007  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .DriverName
	I0108 23:51:04.542141  434244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:51:04.542188  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:04.542313  434244 ssh_runner.go:195] Run: cat /version.json
	I0108 23:51:04.542362  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHHostname
	I0108 23:51:04.545272  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.545566  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.545749  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:04.545790  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.545918  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:04.546038  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:50:93", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:49:16 +0000 UTC Type:0 Mac:52:54:00:d1:50:93 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:running-upgrade-967805 Clientid:01:52:54:00:d1:50:93}
	I0108 23:51:04.546069  434244 main.go:141] libmachine: (running-upgrade-967805) DBG | domain running-upgrade-967805 has defined IP address 192.168.50.246 and MAC address 52:54:00:d1:50:93 in network minikube-net
	I0108 23:51:04.546120  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:04.546281  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:04.546344  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHPort
	I0108 23:51:04.546506  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHKeyPath
	I0108 23:51:04.546520  434244 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/running-upgrade-967805/id_rsa Username:docker}
	I0108 23:51:04.546642  434244 main.go:141] libmachine: (running-upgrade-967805) Calling .GetSSHUsername
	I0108 23:51:04.546736  434244 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/running-upgrade-967805/id_rsa Username:docker}
	W0108 23:51:04.676797  434244 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:51:04.797521  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 23:51:04.797598  434244 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.006899506s
	I0108 23:51:04.797619  434244 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 23:51:04.974748  434244 cache.go:157] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 23:51:04.974781  434244 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.183986041s
	I0108 23:51:04.974797  434244 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 23:51:04.974815  434244 cache.go:87] Successfully saved all images to host disk.
	I0108 23:51:04.974868  434244 ssh_runner.go:195] Run: systemctl --version
	I0108 23:51:04.980253  434244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:51:05.085518  434244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 23:51:05.095675  434244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:51:05.095758  434244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:51:05.103011  434244 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 23:51:05.103041  434244 start.go:475] detecting cgroup driver to use...
	I0108 23:51:05.103123  434244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:51:05.116708  434244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:51:05.143460  434244 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:51:05.143533  434244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:51:05.154925  434244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:51:05.167217  434244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:51:05.177542  434244 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:51:05.177617  434244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:51:05.321807  434244 docker.go:219] disabling docker service ...
	I0108 23:51:05.321890  434244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:51:06.345451  434244 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.023526859s)
	I0108 23:51:06.345549  434244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:51:06.366806  434244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:51:06.475299  434244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:51:06.674769  434244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:51:06.687611  434244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:51:06.704898  434244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 23:51:06.704983  434244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:51:06.714453  434244 out.go:177] 
	W0108 23:51:06.716193  434244 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:51:06.716218  434244 out.go:239] * 
	* 
	W0108 23:51:06.717494  434244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:51:06.719579  434244 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-967805 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 23:51:06.740208403 +0000 UTC m=+3565.617158626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-967805 -n running-upgrade-967805
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-967805 -n running-upgrade-967805: exit status 4 (285.901184ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:51:06.985408  434293 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-967805" does not appear in /home/jenkins/minikube-integration/17830-399915/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-967805" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-967805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-967805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-967805: (1.282957733s)
--- FAIL: TestRunningBinaryUpgrade (142.34s)
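Note on the failure above: the exit status 90 comes from the RUNTIME_ENABLE step, where the new binary rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf; the Buildroot 2019.02.7 guest provisioned by minikube v1.6.2 has no such drop-in file, so sed exits with status 1. A minimal, hypothetical sketch of a guarded variant of that step is below; the drop-in path and sed expression are taken from the log, while the /etc/crio/crio.conf fallback is an assumption about the legacy guest, not something this log confirms.

	# Sketch only (not the minikube implementation): update pause_image wherever
	# the guest actually keeps its CRI-O configuration.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	elif [ -f /etc/crio/crio.conf ]; then
	  # Assumed legacy location; adjust if the old image stores it elsewhere.
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf
	else
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf 'pause_image = "registry.k8s.io/pause:3.1"\n' | sudo tee "$CONF" >/dev/null
	fi

Run over SSH on the guest, as ssh_runner does for the other commands in this log, this would avoid the hard failure when the drop-in directory is absent.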

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (306.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.522804985.exe start -p stopped-upgrade-621247 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.522804985.exe start -p stopped-upgrade-621247 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m22.696032173s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.522804985.exe -p stopped-upgrade-621247 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.522804985.exe -p stopped-upgrade-621247 stop: (1m32.411736803s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-621247 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-621247 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m11.601917576s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-621247] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-621247 in cluster stopped-upgrade-621247
	* Restarting existing kvm2 VM for "stopped-upgrade-621247" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:52:29.651619  435691 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:52:29.651869  435691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:52:29.651879  435691 out.go:309] Setting ErrFile to fd 2...
	I0108 23:52:29.651886  435691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:52:29.652083  435691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:52:29.652636  435691 out.go:303] Setting JSON to false
	I0108 23:52:29.653630  435691 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16476,"bootTime":1704741474,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:52:29.653699  435691 start.go:138] virtualization: kvm guest
	I0108 23:52:29.655773  435691 out.go:177] * [stopped-upgrade-621247] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:52:29.657855  435691 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:52:29.657881  435691 notify.go:220] Checking for updates...
	I0108 23:52:29.659284  435691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:52:29.660959  435691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:52:29.662547  435691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:52:29.665507  435691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:52:29.666567  435691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:52:29.668588  435691 config.go:182] Loaded profile config "stopped-upgrade-621247": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 23:52:29.668610  435691 start_flags.go:694] config upgrade: Driver=kvm2
	I0108 23:52:29.668624  435691 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0108 23:52:29.668736  435691 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/stopped-upgrade-621247/config.json ...
	I0108 23:52:29.669638  435691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:52:29.669694  435691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:52:29.685349  435691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
	I0108 23:52:29.685761  435691 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:52:29.686411  435691 main.go:141] libmachine: Using API Version  1
	I0108 23:52:29.686441  435691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:52:29.686808  435691 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:52:29.687036  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:52:29.688842  435691 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:52:29.690271  435691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:52:29.690582  435691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:52:29.690618  435691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:52:29.705583  435691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0108 23:52:29.706045  435691 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:52:29.706589  435691 main.go:141] libmachine: Using API Version  1
	I0108 23:52:29.706626  435691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:52:29.706966  435691 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:52:29.707151  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:52:29.743790  435691 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:52:29.745265  435691 start.go:298] selected driver: kvm2
	I0108 23:52:29.745285  435691 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-621247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.84 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:52:29.745403  435691 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:52:29.746430  435691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.746539  435691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 23:52:29.761710  435691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 23:52:29.762198  435691 cni.go:84] Creating CNI manager for ""
	I0108 23:52:29.762222  435691 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 23:52:29.762238  435691 start_flags.go:323] config:
	{Name:stopped-upgrade-621247 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.84 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 23:52:29.762461  435691 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.764181  435691 out.go:177] * Starting control plane node stopped-upgrade-621247 in cluster stopped-upgrade-621247
	I0108 23:52:29.765402  435691 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 23:52:29.800614  435691 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 23:52:29.800724  435691 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/stopped-upgrade-621247/config.json ...
	I0108 23:52:29.800841  435691 cache.go:107] acquiring lock: {Name:mk8836388825799782229dd3205a885146fd215a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.800892  435691 cache.go:107] acquiring lock: {Name:mk231c264cad9b303f6ec8b22af8bdd252b4fdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.800889  435691 cache.go:107] acquiring lock: {Name:mkef4b86bda257efecd7db620b6345c0a31887de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.800958  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 23:52:29.800958  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 23:52:29.800953  435691 cache.go:107] acquiring lock: {Name:mk4458dbe72fbfe64b20ae9545e888e732f8294a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.800989  435691 start.go:365] acquiring machines lock for stopped-upgrade-621247: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 23:52:29.801001  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 23:52:29.800994  435691 cache.go:107] acquiring lock: {Name:mk72c03f967f717309a26a5e526633388069ab03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.801059  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 23:52:29.801041  435691 cache.go:107] acquiring lock: {Name:mkfa26d6386c5ed2f5fcb97e3eab352e7353f242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.801078  435691 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 88.64µs
	I0108 23:52:29.801093  435691 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 23:52:29.800971  435691 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 78.348µs
	I0108 23:52:29.801104  435691 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 23:52:29.801012  435691 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 133.19µs
	I0108 23:52:29.801120  435691 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 23:52:29.800971  435691 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 142.132µs
	I0108 23:52:29.801133  435691 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 23:52:29.801076  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 23:52:29.801146  435691 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 220.667µs
	I0108 23:52:29.801141  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 23:52:29.801156  435691 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 23:52:29.800850  435691 cache.go:107] acquiring lock: {Name:mke1abc68d57b011e235f2a693188c932692ee34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.801161  435691 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 200.825µs
	I0108 23:52:29.801135  435691 cache.go:107] acquiring lock: {Name:mke10ad9f35f7f95d57560cb76caa952d55edb07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:52:29.801193  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:52:29.801207  435691 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 359.149µs
	I0108 23:52:29.801222  435691 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:52:29.801174  435691 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 23:52:29.801270  435691 cache.go:115] /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 23:52:29.801282  435691 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 191.072µs
	I0108 23:52:29.801296  435691 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 23:52:29.801311  435691 cache.go:87] Successfully saved all images to host disk.
	I0108 23:53:01.996256  435691 start.go:369] acquired machines lock for "stopped-upgrade-621247" in 32.195220051s
	I0108 23:53:01.996316  435691 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:53:01.996327  435691 fix.go:54] fixHost starting: minikube
	I0108 23:53:01.996739  435691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:53:01.996789  435691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:53:02.013367  435691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0108 23:53:02.013818  435691 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:53:02.014413  435691 main.go:141] libmachine: Using API Version  1
	I0108 23:53:02.014442  435691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:53:02.014773  435691 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:53:02.014989  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:02.015166  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetState
	I0108 23:53:02.016612  435691 fix.go:102] recreateIfNeeded on stopped-upgrade-621247: state=Stopped err=<nil>
	I0108 23:53:02.016650  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	W0108 23:53:02.016822  435691 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:53:02.019082  435691 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-621247" ...
	I0108 23:53:02.020680  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .Start
	I0108 23:53:02.020877  435691 main.go:141] libmachine: (stopped-upgrade-621247) Ensuring networks are active...
	I0108 23:53:02.021516  435691 main.go:141] libmachine: (stopped-upgrade-621247) Ensuring network default is active
	I0108 23:53:02.021775  435691 main.go:141] libmachine: (stopped-upgrade-621247) Ensuring network minikube-net is active
	I0108 23:53:02.022102  435691 main.go:141] libmachine: (stopped-upgrade-621247) Getting domain xml...
	I0108 23:53:02.022676  435691 main.go:141] libmachine: (stopped-upgrade-621247) Creating domain...
	I0108 23:53:03.282150  435691 main.go:141] libmachine: (stopped-upgrade-621247) Waiting to get IP...
	I0108 23:53:03.283308  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:03.283743  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:03.283846  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:03.283726  436481 retry.go:31] will retry after 274.619622ms: waiting for machine to come up
	I0108 23:53:03.560388  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:03.560992  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:03.561025  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:03.560936  436481 retry.go:31] will retry after 240.267447ms: waiting for machine to come up
	I0108 23:53:03.802398  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:03.802942  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:03.802972  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:03.802888  436481 retry.go:31] will retry after 466.578908ms: waiting for machine to come up
	I0108 23:53:04.271723  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:04.272200  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:04.272225  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:04.272151  436481 retry.go:31] will retry after 505.591644ms: waiting for machine to come up
	I0108 23:53:04.779932  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:04.780417  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:04.780449  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:04.780368  436481 retry.go:31] will retry after 650.79163ms: waiting for machine to come up
	I0108 23:53:05.433429  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:05.433865  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:05.433909  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:05.433813  436481 retry.go:31] will retry after 839.970616ms: waiting for machine to come up
	I0108 23:53:06.276330  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:06.276846  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:06.276908  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:06.276773  436481 retry.go:31] will retry after 1.129907002s: waiting for machine to come up
	I0108 23:53:07.408706  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:07.409214  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:07.409238  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:07.409159  436481 retry.go:31] will retry after 1.453831521s: waiting for machine to come up
	I0108 23:53:08.864657  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:08.865135  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:08.865167  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:08.865076  436481 retry.go:31] will retry after 1.13249831s: waiting for machine to come up
	I0108 23:53:09.999563  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:10.000234  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:10.000263  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:10.000173  436481 retry.go:31] will retry after 1.699152903s: waiting for machine to come up
	I0108 23:53:11.702133  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:11.702675  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:11.702709  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:11.702616  436481 retry.go:31] will retry after 2.735105733s: waiting for machine to come up
	I0108 23:53:14.440817  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:14.441378  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:14.441414  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:14.441318  436481 retry.go:31] will retry after 2.289111788s: waiting for machine to come up
	I0108 23:53:16.732701  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:16.733202  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:16.733230  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:16.733155  436481 retry.go:31] will retry after 3.580262417s: waiting for machine to come up
	I0108 23:53:20.317218  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:20.317720  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:20.317750  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:20.317672  436481 retry.go:31] will retry after 5.566756008s: waiting for machine to come up
	I0108 23:53:25.888752  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:25.889416  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:25.889451  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:25.889344  436481 retry.go:31] will retry after 4.274708539s: waiting for machine to come up
	I0108 23:53:30.167969  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:30.168443  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | unable to find current IP address of domain stopped-upgrade-621247 in network minikube-net
	I0108 23:53:30.168479  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | I0108 23:53:30.168373  436481 retry.go:31] will retry after 8.021329306s: waiting for machine to come up
	I0108 23:53:38.192072  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.192540  435691 main.go:141] libmachine: (stopped-upgrade-621247) Found IP for machine: 192.168.50.84
	I0108 23:53:38.192575  435691 main.go:141] libmachine: (stopped-upgrade-621247) Reserving static IP address...
	I0108 23:53:38.192609  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has current primary IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.193075  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "stopped-upgrade-621247", mac: "52:54:00:fb:ff:fd", ip: "192.168.50.84"} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.193111  435691 main.go:141] libmachine: (stopped-upgrade-621247) Reserved static IP address: 192.168.50.84
	I0108 23:53:38.193130  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-621247", mac: "52:54:00:fb:ff:fd", ip: "192.168.50.84"}
	I0108 23:53:38.193151  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | Getting to WaitForSSH function...
	I0108 23:53:38.193168  435691 main.go:141] libmachine: (stopped-upgrade-621247) Waiting for SSH to be available...
	I0108 23:53:38.195598  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.196014  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.196055  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.196129  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | Using SSH client type: external
	I0108 23:53:38.196192  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa (-rw-------)
	I0108 23:53:38.196232  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 23:53:38.196255  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | About to run SSH command:
	I0108 23:53:38.196269  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | exit 0
	I0108 23:53:38.327049  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | SSH cmd err, output: <nil>: 
	I0108 23:53:38.327422  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetConfigRaw
	I0108 23:53:38.328146  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetIP
	I0108 23:53:38.330805  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.331192  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.331238  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.331497  435691 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/stopped-upgrade-621247/config.json ...
	I0108 23:53:38.331667  435691 machine.go:88] provisioning docker machine ...
	I0108 23:53:38.331688  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:38.331920  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetMachineName
	I0108 23:53:38.332102  435691 buildroot.go:166] provisioning hostname "stopped-upgrade-621247"
	I0108 23:53:38.332122  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetMachineName
	I0108 23:53:38.332276  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:38.334337  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.334787  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.334812  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.334927  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:38.335121  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.335282  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.335427  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:38.335593  435691 main.go:141] libmachine: Using SSH client type: native
	I0108 23:53:38.335928  435691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I0108 23:53:38.335943  435691 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-621247 && echo "stopped-upgrade-621247" | sudo tee /etc/hostname
	I0108 23:53:38.461003  435691 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-621247
	
	I0108 23:53:38.461035  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:38.463698  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.464081  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.464111  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.464304  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:38.464488  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.464659  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.464814  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:38.464976  435691 main.go:141] libmachine: Using SSH client type: native
	I0108 23:53:38.465455  435691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I0108 23:53:38.465483  435691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-621247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-621247/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-621247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:53:38.587929  435691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:53:38.587967  435691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0108 23:53:38.587991  435691 buildroot.go:174] setting up certificates
	I0108 23:53:38.588003  435691 provision.go:83] configureAuth start
	I0108 23:53:38.588012  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetMachineName
	I0108 23:53:38.588331  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetIP
	I0108 23:53:38.590810  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.591152  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.591173  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.591394  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:38.593483  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.593790  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.593823  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.593975  435691 provision.go:138] copyHostCerts
	I0108 23:53:38.594049  435691 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0108 23:53:38.594063  435691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0108 23:53:38.594141  435691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0108 23:53:38.594269  435691 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0108 23:53:38.594282  435691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0108 23:53:38.594324  435691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0108 23:53:38.594419  435691 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0108 23:53:38.594429  435691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0108 23:53:38.594462  435691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0108 23:53:38.594539  435691 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-621247 san=[192.168.50.84 192.168.50.84 localhost 127.0.0.1 minikube stopped-upgrade-621247]
	I0108 23:53:38.669721  435691 provision.go:172] copyRemoteCerts
	I0108 23:53:38.669821  435691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:53:38.669865  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:38.672759  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.673155  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.673187  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.673524  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:38.673728  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.673960  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:38.674091  435691 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa Username:docker}
	I0108 23:53:38.761251  435691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:53:38.774824  435691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 23:53:38.787731  435691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 23:53:38.800116  435691 provision.go:86] duration metric: configureAuth took 212.100771ms
	I0108 23:53:38.800139  435691 buildroot.go:189] setting minikube options for container-runtime
	I0108 23:53:38.800281  435691 config.go:182] Loaded profile config "stopped-upgrade-621247": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 23:53:38.800358  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:38.802881  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.803283  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:38.803320  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:38.803487  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:38.803678  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.803885  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:38.804063  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:38.804293  435691 main.go:141] libmachine: Using SSH client type: native
	I0108 23:53:38.804597  435691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I0108 23:53:38.804612  435691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:53:40.331983  435691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:53:40.332012  435691 machine.go:91] provisioned docker machine in 2.000330268s
	I0108 23:53:40.332025  435691 start.go:300] post-start starting for "stopped-upgrade-621247" (driver="kvm2")
	I0108 23:53:40.332072  435691 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:53:40.332119  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:40.332520  435691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:53:40.332552  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:40.335177  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.335589  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:40.335618  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.335775  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:40.335967  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:40.336201  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:40.336345  435691 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa Username:docker}
	I0108 23:53:40.421631  435691 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:53:40.425543  435691 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 23:53:40.425568  435691 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0108 23:53:40.425632  435691 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0108 23:53:40.425703  435691 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0108 23:53:40.425787  435691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:53:40.430833  435691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0108 23:53:40.443217  435691 start.go:303] post-start completed in 111.178856ms
	I0108 23:53:40.443235  435691 fix.go:56] fixHost completed within 38.446909612s
	I0108 23:53:40.443259  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:40.446061  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.446437  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:40.446473  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.446592  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:40.446798  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:40.446979  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:40.447106  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:40.447275  435691 main.go:141] libmachine: Using SSH client type: native
	I0108 23:53:40.447643  435691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I0108 23:53:40.447656  435691 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 23:53:40.568039  435691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758020.507952446
	
	I0108 23:53:40.568071  435691 fix.go:206] guest clock: 1704758020.507952446
	I0108 23:53:40.568080  435691 fix.go:219] Guest: 2024-01-08 23:53:40.507952446 +0000 UTC Remote: 2024-01-08 23:53:40.443239289 +0000 UTC m=+70.850540478 (delta=64.713157ms)
	I0108 23:53:40.568123  435691 fix.go:190] guest clock delta is within tolerance: 64.713157ms
	I0108 23:53:40.568130  435691 start.go:83] releasing machines lock for "stopped-upgrade-621247", held for 38.571845755s
	I0108 23:53:40.568166  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:40.568494  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetIP
	I0108 23:53:40.571318  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.571753  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:40.571801  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.571931  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:40.572610  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:40.572803  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .DriverName
	I0108 23:53:40.572909  435691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:53:40.572956  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:40.573052  435691 ssh_runner.go:195] Run: cat /version.json
	I0108 23:53:40.573080  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHHostname
	I0108 23:53:40.575608  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.575947  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.575983  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:40.576008  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.576147  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:40.576354  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:40.576438  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ff:fd", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-09 00:53:26 +0000 UTC Type:0 Mac:52:54:00:fb:ff:fd Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:stopped-upgrade-621247 Clientid:01:52:54:00:fb:ff:fd}
	I0108 23:53:40.576461  435691 main.go:141] libmachine: (stopped-upgrade-621247) DBG | domain stopped-upgrade-621247 has defined IP address 192.168.50.84 and MAC address 52:54:00:fb:ff:fd in network minikube-net
	I0108 23:53:40.576538  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:40.576648  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHPort
	I0108 23:53:40.576725  435691 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa Username:docker}
	I0108 23:53:40.576806  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHKeyPath
	I0108 23:53:40.576972  435691 main.go:141] libmachine: (stopped-upgrade-621247) Calling .GetSSHUsername
	I0108 23:53:40.577139  435691 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/stopped-upgrade-621247/id_rsa Username:docker}
	W0108 23:53:40.664734  435691 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:53:40.664834  435691 ssh_runner.go:195] Run: systemctl --version
	I0108 23:53:40.692125  435691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:53:40.819834  435691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 23:53:40.825120  435691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 23:53:40.825194  435691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:53:40.830274  435691 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 23:53:40.830299  435691 start.go:475] detecting cgroup driver to use...
	I0108 23:53:40.830353  435691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:53:40.840845  435691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:53:40.849176  435691 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:53:40.849228  435691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:53:40.856805  435691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:53:40.864766  435691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:53:40.872405  435691 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:53:40.872478  435691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:53:40.954431  435691 docker.go:219] disabling docker service ...
	I0108 23:53:40.954503  435691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:53:40.966761  435691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:53:40.974707  435691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:53:41.053647  435691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:53:41.150797  435691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:53:41.159445  435691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:53:41.171806  435691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 23:53:41.171877  435691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:53:41.180682  435691 out.go:177] 
	W0108 23:53:41.182103  435691 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:53:41.182125  435691 out.go:239] * 
	* 
	W0108 23:53:41.183033  435691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:53:41.184534  435691 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-621247 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (306.71s)
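Note on the failure above: the upgraded guest reports itself as Buildroot 2019.02.7 and carries no /etc/crio/crio.conf.d/02-crio.conf, so the pause_image sed exits with status 1 and minikube aborts with RUNTIME_ENABLE. A minimal shell sketch of a guarded update, assuming the same drop-in path and pause image shown in the log (illustrative only, not minikube's actual code path):

	# Patch pause_image if the drop-in exists; otherwise create it with the expected setting.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	else
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.1"\n' | sudo tee "$CONF" >/dev/null
	fi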

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (140.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-003293 --alsologtostderr -v=3
E0109 00:01:38.404236  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:38.537228  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-003293 --alsologtostderr -v=3: exit status 82 (2m1.695175867s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-003293"  ...
	* Stopping node "old-k8s-version-003293"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:01:38.302504  450953 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:01:38.302666  450953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:38.302680  450953 out.go:309] Setting ErrFile to fd 2...
	I0109 00:01:38.302686  450953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:38.302899  450953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:01:38.303147  450953 out.go:303] Setting JSON to false
	I0109 00:01:38.303239  450953 mustload.go:65] Loading cluster: old-k8s-version-003293
	I0109 00:01:38.303722  450953 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:01:38.303832  450953 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/config.json ...
	I0109 00:01:38.304033  450953 mustload.go:65] Loading cluster: old-k8s-version-003293
	I0109 00:01:38.304198  450953 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:01:38.304248  450953 stop.go:39] StopHost: old-k8s-version-003293
	I0109 00:01:38.304709  450953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:01:38.304763  450953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:01:38.320060  450953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0109 00:01:38.320670  450953 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:01:38.321376  450953 main.go:141] libmachine: Using API Version  1
	I0109 00:01:38.321408  450953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:01:38.321874  450953 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:01:38.324751  450953 out.go:177] * Stopping node "old-k8s-version-003293"  ...
	I0109 00:01:38.326039  450953 main.go:141] libmachine: Stopping "old-k8s-version-003293"...
	I0109 00:01:38.326076  450953 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:01:38.327885  450953 main.go:141] libmachine: (old-k8s-version-003293) Calling .Stop
	I0109 00:01:38.331893  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 0/60
	I0109 00:01:39.334138  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 1/60
	I0109 00:01:40.336083  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 2/60
	I0109 00:01:41.338636  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 3/60
	I0109 00:01:42.339951  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 4/60
	I0109 00:01:43.342035  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 5/60
	I0109 00:01:44.343806  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 6/60
	I0109 00:01:45.345738  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 7/60
	I0109 00:01:46.347461  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 8/60
	I0109 00:01:47.349035  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 9/60
	I0109 00:01:48.350844  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 10/60
	I0109 00:01:49.353528  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 11/60
	I0109 00:01:50.355257  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 12/60
	I0109 00:01:51.356951  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 13/60
	I0109 00:01:52.358384  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 14/60
	I0109 00:01:53.360473  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 15/60
	I0109 00:01:54.362030  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 16/60
	I0109 00:01:55.363395  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 17/60
	I0109 00:01:56.364874  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 18/60
	I0109 00:01:57.366157  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 19/60
	I0109 00:01:58.368824  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 20/60
	I0109 00:01:59.370999  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 21/60
	I0109 00:02:00.372514  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 22/60
	I0109 00:02:01.373997  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 23/60
	I0109 00:02:02.375617  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 24/60
	I0109 00:02:03.377372  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 25/60
	I0109 00:02:04.379376  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 26/60
	I0109 00:02:05.381040  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 27/60
	I0109 00:02:06.382762  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 28/60
	I0109 00:02:07.383960  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 29/60
	I0109 00:02:08.385899  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 30/60
	I0109 00:02:09.387197  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 31/60
	I0109 00:02:10.388567  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 32/60
	I0109 00:02:11.389863  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 33/60
	I0109 00:02:12.391907  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 34/60
	I0109 00:02:13.393875  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 35/60
	I0109 00:02:14.395804  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 36/60
	I0109 00:02:15.398011  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 37/60
	I0109 00:02:16.399543  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 38/60
	I0109 00:02:17.401247  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 39/60
	I0109 00:02:18.402789  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 40/60
	I0109 00:02:19.404831  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 41/60
	I0109 00:02:20.406926  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 42/60
	I0109 00:02:21.408615  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 43/60
	I0109 00:02:22.410054  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 44/60
	I0109 00:02:23.411914  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 45/60
	I0109 00:02:24.413375  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 46/60
	I0109 00:02:25.414850  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 47/60
	I0109 00:02:26.416259  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 48/60
	I0109 00:02:27.418006  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 49/60
	I0109 00:02:28.420121  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 50/60
	I0109 00:02:29.421749  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 51/60
	I0109 00:02:30.423154  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 52/60
	I0109 00:02:31.424635  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 53/60
	I0109 00:02:32.425994  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 54/60
	I0109 00:02:33.427964  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 55/60
	I0109 00:02:34.429393  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 56/60
	I0109 00:02:35.430737  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 57/60
	I0109 00:02:36.432087  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 58/60
	I0109 00:02:37.433596  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 59/60
	I0109 00:02:38.434682  450953 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:02:38.434782  450953 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:02:38.434802  450953 retry.go:31] will retry after 1.346081792s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:02:39.782309  450953 stop.go:39] StopHost: old-k8s-version-003293
	I0109 00:02:39.782776  450953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:02:39.782852  450953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:02:39.797471  450953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0109 00:02:39.798013  450953 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:02:39.798518  450953 main.go:141] libmachine: Using API Version  1
	I0109 00:02:39.798560  450953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:02:39.799006  450953 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:02:39.801263  450953 out.go:177] * Stopping node "old-k8s-version-003293"  ...
	I0109 00:02:39.802634  450953 main.go:141] libmachine: Stopping "old-k8s-version-003293"...
	I0109 00:02:39.802650  450953 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:02:39.804471  450953 main.go:141] libmachine: (old-k8s-version-003293) Calling .Stop
	I0109 00:02:39.807797  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 0/60
	I0109 00:02:40.809215  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 1/60
	I0109 00:02:41.810654  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 2/60
	I0109 00:02:42.812122  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 3/60
	I0109 00:02:43.813861  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 4/60
	I0109 00:02:44.815694  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 5/60
	I0109 00:02:45.817803  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 6/60
	I0109 00:02:46.819294  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 7/60
	I0109 00:02:47.820770  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 8/60
	I0109 00:02:48.822303  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 9/60
	I0109 00:02:49.824271  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 10/60
	I0109 00:02:50.826023  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 11/60
	I0109 00:02:51.827422  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 12/60
	I0109 00:02:52.828996  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 13/60
	I0109 00:02:53.830241  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 14/60
	I0109 00:02:54.832491  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 15/60
	I0109 00:02:55.833865  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 16/60
	I0109 00:02:56.835780  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 17/60
	I0109 00:02:57.837226  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 18/60
	I0109 00:02:58.838628  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 19/60
	I0109 00:02:59.840720  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 20/60
	I0109 00:03:00.842233  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 21/60
	I0109 00:03:01.843943  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 22/60
	I0109 00:03:02.845222  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 23/60
	I0109 00:03:03.846832  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 24/60
	I0109 00:03:04.848825  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 25/60
	I0109 00:03:05.850308  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 26/60
	I0109 00:03:06.851842  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 27/60
	I0109 00:03:07.853314  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 28/60
	I0109 00:03:08.854787  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 29/60
	I0109 00:03:09.856555  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 30/60
	I0109 00:03:10.858075  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 31/60
	I0109 00:03:11.859822  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 32/60
	I0109 00:03:12.861229  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 33/60
	I0109 00:03:13.862901  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 34/60
	I0109 00:03:14.865062  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 35/60
	I0109 00:03:15.866634  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 36/60
	I0109 00:03:16.868257  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 37/60
	I0109 00:03:17.869712  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 38/60
	I0109 00:03:18.871233  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 39/60
	I0109 00:03:19.873230  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 40/60
	I0109 00:03:20.874611  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 41/60
	I0109 00:03:21.876200  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 42/60
	I0109 00:03:22.877730  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 43/60
	I0109 00:03:23.879508  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 44/60
	I0109 00:03:24.881391  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 45/60
	I0109 00:03:25.883031  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 46/60
	I0109 00:03:26.884477  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 47/60
	I0109 00:03:27.885925  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 48/60
	I0109 00:03:28.887597  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 49/60
	I0109 00:03:29.889509  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 50/60
	I0109 00:03:30.891222  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 51/60
	I0109 00:03:31.892618  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 52/60
	I0109 00:03:32.894253  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 53/60
	I0109 00:03:33.895640  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 54/60
	I0109 00:03:34.897647  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 55/60
	I0109 00:03:35.899152  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 56/60
	I0109 00:03:36.900933  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 57/60
	I0109 00:03:37.902271  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 58/60
	I0109 00:03:38.903777  450953 main.go:141] libmachine: (old-k8s-version-003293) Waiting for machine to stop 59/60
	I0109 00:03:39.904659  450953 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:03:39.904714  450953 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:39.906777  450953 out.go:177] 
	W0109 00:03:39.908222  450953 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0109 00:03:39.908251  450953 out.go:239] * 
	* 
	W0109 00:03:39.913277  450953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:03:39.914715  450953 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-003293 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293: exit status 3 (18.519604237s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:03:58.435746  451632 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host
	E0109 00:03:58.435770  451632 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-003293" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.22s)
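Note on the stop timeout above: each attempt polls the VM state once per second for 60 iterations ("Waiting for machine to stop N/60"), the domain never leaves "Running", and after one retry minikube exits 82 with GUEST_STOP_TIMEOUT. A minimal shell sketch of the same polling pattern against libvirt, assuming the profile name from the log and that virsh is available on the host (illustrative only):

	# Poll the libvirt domain state for up to 60 seconds, mirroring the loop in the log.
	for i in $(seq 1 60); do
	  state=$(sudo virsh domstate old-k8s-version-003293 2>/dev/null)
	  if [ "$state" = "shut off" ]; then
	    echo "stopped after ${i}s"
	    exit 0
	  fi
	  sleep 1
	done
	echo "domain still ${state:-unknown} after 60s" >&2
	exit 82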

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-845373 --alsologtostderr -v=3
E0109 00:01:40.325247  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:42.885482  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:48.005665  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:48.777505  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:58.246336  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-845373 --alsologtostderr -v=3: exit status 82 (2m0.879919394s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-845373"  ...
	* Stopping node "embed-certs-845373"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:01:40.206494  451030 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:01:40.206628  451030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:40.206637  451030 out.go:309] Setting ErrFile to fd 2...
	I0109 00:01:40.206642  451030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:40.206858  451030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:01:40.207138  451030 out.go:303] Setting JSON to false
	I0109 00:01:40.207231  451030 mustload.go:65] Loading cluster: embed-certs-845373
	I0109 00:01:40.207618  451030 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:01:40.207715  451030 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/config.json ...
	I0109 00:01:40.207884  451030 mustload.go:65] Loading cluster: embed-certs-845373
	I0109 00:01:40.208009  451030 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:01:40.208042  451030 stop.go:39] StopHost: embed-certs-845373
	I0109 00:01:40.208501  451030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:01:40.208554  451030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:01:40.224944  451030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I0109 00:01:40.225465  451030 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:01:40.226212  451030 main.go:141] libmachine: Using API Version  1
	I0109 00:01:40.226249  451030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:01:40.226665  451030 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:01:40.229466  451030 out.go:177] * Stopping node "embed-certs-845373"  ...
	I0109 00:01:40.230852  451030 main.go:141] libmachine: Stopping "embed-certs-845373"...
	I0109 00:01:40.230874  451030 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:01:40.232798  451030 main.go:141] libmachine: (embed-certs-845373) Calling .Stop
	I0109 00:01:40.236878  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 0/60
	I0109 00:01:41.238409  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 1/60
	I0109 00:01:42.240941  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 2/60
	I0109 00:01:43.242457  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 3/60
	I0109 00:01:44.243922  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 4/60
	I0109 00:01:45.245948  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 5/60
	I0109 00:01:46.247759  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 6/60
	I0109 00:01:47.250003  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 7/60
	I0109 00:01:48.251641  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 8/60
	I0109 00:01:49.253945  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 9/60
	I0109 00:01:50.255463  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 10/60
	I0109 00:01:51.257020  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 11/60
	I0109 00:01:52.258490  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 12/60
	I0109 00:01:53.259982  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 13/60
	I0109 00:01:54.261287  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 14/60
	I0109 00:01:55.263611  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 15/60
	I0109 00:01:56.265974  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 16/60
	I0109 00:01:57.268038  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 17/60
	I0109 00:01:58.269863  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 18/60
	I0109 00:01:59.271317  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 19/60
	I0109 00:02:00.273646  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 20/60
	I0109 00:02:01.275497  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 21/60
	I0109 00:02:02.277924  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 22/60
	I0109 00:02:03.280256  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 23/60
	I0109 00:02:04.281960  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 24/60
	I0109 00:02:05.283684  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 25/60
	I0109 00:02:06.286021  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 26/60
	I0109 00:02:07.287506  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 27/60
	I0109 00:02:08.288931  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 28/60
	I0109 00:02:09.290384  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 29/60
	I0109 00:02:10.292725  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 30/60
	I0109 00:02:11.295112  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 31/60
	I0109 00:02:12.297188  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 32/60
	I0109 00:02:13.298671  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 33/60
	I0109 00:02:14.300338  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 34/60
	I0109 00:02:15.302797  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 35/60
	I0109 00:02:16.304203  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 36/60
	I0109 00:02:17.305857  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 37/60
	I0109 00:02:18.307067  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 38/60
	I0109 00:02:19.308645  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 39/60
	I0109 00:02:20.310779  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 40/60
	I0109 00:02:21.312513  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 41/60
	I0109 00:02:22.313922  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 42/60
	I0109 00:02:23.315456  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 43/60
	I0109 00:02:24.316904  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 44/60
	I0109 00:02:25.318701  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 45/60
	I0109 00:02:26.320128  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 46/60
	I0109 00:02:27.321683  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 47/60
	I0109 00:02:28.322816  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 48/60
	I0109 00:02:29.324432  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 49/60
	I0109 00:02:30.326683  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 50/60
	I0109 00:02:31.328024  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 51/60
	I0109 00:02:32.329569  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 52/60
	I0109 00:02:33.331108  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 53/60
	I0109 00:02:34.332515  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 54/60
	I0109 00:02:35.334475  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 55/60
	I0109 00:02:36.335928  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 56/60
	I0109 00:02:37.337423  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 57/60
	I0109 00:02:38.338750  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 58/60
	I0109 00:02:39.340198  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 59/60
	I0109 00:02:40.340593  451030 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:02:40.340670  451030 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:02:40.340693  451030 retry.go:31] will retry after 553.467696ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:02:40.894338  451030 stop.go:39] StopHost: embed-certs-845373
	I0109 00:02:40.894752  451030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:02:40.894811  451030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:02:40.909243  451030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0109 00:02:40.909720  451030 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:02:40.910272  451030 main.go:141] libmachine: Using API Version  1
	I0109 00:02:40.910300  451030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:02:40.910670  451030 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:02:40.912846  451030 out.go:177] * Stopping node "embed-certs-845373"  ...
	I0109 00:02:40.914174  451030 main.go:141] libmachine: Stopping "embed-certs-845373"...
	I0109 00:02:40.914190  451030 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:02:40.915915  451030 main.go:141] libmachine: (embed-certs-845373) Calling .Stop
	I0109 00:02:40.919402  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 0/60
	I0109 00:02:41.920878  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 1/60
	I0109 00:02:42.922190  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 2/60
	I0109 00:02:43.923945  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 3/60
	I0109 00:02:44.926122  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 4/60
	I0109 00:02:45.928518  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 5/60
	I0109 00:02:46.930241  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 6/60
	I0109 00:02:47.931660  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 7/60
	I0109 00:02:48.933290  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 8/60
	I0109 00:02:49.934701  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 9/60
	I0109 00:02:50.936787  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 10/60
	I0109 00:02:51.938062  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 11/60
	I0109 00:02:52.940338  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 12/60
	I0109 00:02:53.941930  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 13/60
	I0109 00:02:54.943548  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 14/60
	I0109 00:02:55.945270  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 15/60
	I0109 00:02:56.946859  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 16/60
	I0109 00:02:57.948303  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 17/60
	I0109 00:02:58.949934  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 18/60
	I0109 00:02:59.951398  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 19/60
	I0109 00:03:00.953161  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 20/60
	I0109 00:03:01.954682  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 21/60
	I0109 00:03:02.956270  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 22/60
	I0109 00:03:03.957769  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 23/60
	I0109 00:03:04.959142  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 24/60
	I0109 00:03:05.960533  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 25/60
	I0109 00:03:06.961988  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 26/60
	I0109 00:03:07.963532  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 27/60
	I0109 00:03:08.965003  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 28/60
	I0109 00:03:09.966354  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 29/60
	I0109 00:03:10.968442  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 30/60
	I0109 00:03:11.969919  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 31/60
	I0109 00:03:12.971716  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 32/60
	I0109 00:03:13.973404  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 33/60
	I0109 00:03:14.974923  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 34/60
	I0109 00:03:15.976537  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 35/60
	I0109 00:03:16.977945  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 36/60
	I0109 00:03:17.979413  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 37/60
	I0109 00:03:18.981070  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 38/60
	I0109 00:03:19.982627  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 39/60
	I0109 00:03:20.984255  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 40/60
	I0109 00:03:21.985582  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 41/60
	I0109 00:03:22.987081  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 42/60
	I0109 00:03:23.988578  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 43/60
	I0109 00:03:24.989905  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 44/60
	I0109 00:03:25.991626  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 45/60
	I0109 00:03:26.993084  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 46/60
	I0109 00:03:27.994437  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 47/60
	I0109 00:03:28.996020  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 48/60
	I0109 00:03:29.997480  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 49/60
	I0109 00:03:30.999395  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 50/60
	I0109 00:03:32.000966  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 51/60
	I0109 00:03:33.002455  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 52/60
	I0109 00:03:34.004099  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 53/60
	I0109 00:03:35.005881  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 54/60
	I0109 00:03:36.007376  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 55/60
	I0109 00:03:37.008984  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 56/60
	I0109 00:03:38.010428  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 57/60
	I0109 00:03:39.011917  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 58/60
	I0109 00:03:40.013574  451030 main.go:141] libmachine: (embed-certs-845373) Waiting for machine to stop 59/60
	I0109 00:03:41.015126  451030 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:03:41.015181  451030 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:41.017719  451030 out.go:177] 
	W0109 00:03:41.019339  451030 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0109 00:03:41.019373  451030 out.go:239] * 
	* 
	W0109 00:03:41.022631  451030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:03:41.023937  451030 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-845373 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
E0109 00:03:41.133119  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:42.144389  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:03:46.253819  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:56.494608  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373: exit status 3 (18.69049258s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:03:59.715695  451667 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host
	E0109 00:03:59.715715  451667 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-845373" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.57s)
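Editor's note: the failure above follows a fixed shape in the captured stderr: libmachine polls the KVM domain once per second for 60 attempts ("Waiting for machine to stop N/60"), the stop call returns `unable to stop vm, current state "Running"`, stop.go retries once after a sub-second backoff, and the second exhausted attempt is surfaced as GUEST_STOP_TIMEOUT with exit status 82. The Go snippet below is a minimal, self-contained sketch of that poll-and-retry pattern for readers unfamiliar with it; it is not minikube's implementation, and machineState and tryStop are hypothetical names.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// machineState stands in for whatever the KVM driver would report; it is
	// hard-coded to "Running" here to mimic the failing run recorded above.
	func machineState() string { return "Running" }

	// tryStop polls up to 60 times, once per second, mirroring the
	// "Waiting for machine to stop N/60" lines captured in the log.
	func tryStop(name string) error {
		for i := 0; i < 60; i++ {
			if machineState() != "Running" {
				return nil
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/60\n", name, i)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", machineState())
	}

	func main() {
		const name = "embed-certs-845373"
		var err error
		for attempt := 0; attempt < 2; attempt++ { // one retry, as seen in the log
			if err = tryStop(name); err == nil {
				return
			}
			if attempt == 0 {
				time.Sleep(600 * time.Millisecond) // short backoff before retrying
			}
		}
		// minikube reports this condition as GUEST_STOP_TIMEOUT (exit status 82).
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT:", err)
		os.Exit(82)
	}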

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-378213 --alsologtostderr -v=3
E0109 00:02:18.727493  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:02:20.222347  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.227672  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.237963  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.258234  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.299280  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.380042  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.540300  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:20.860772  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:21.500982  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:22.781712  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:25.342057  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:30.462549  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:02:40.703325  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-378213 --alsologtostderr -v=3: exit status 82 (2m1.029995004s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-378213"  ...
	* Stopping node "no-preload-378213"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:02:13.224093  451264 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:02:13.224359  451264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:13.224370  451264 out.go:309] Setting ErrFile to fd 2...
	I0109 00:02:13.224377  451264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:13.224593  451264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:02:13.224847  451264 out.go:303] Setting JSON to false
	I0109 00:02:13.224959  451264 mustload.go:65] Loading cluster: no-preload-378213
	I0109 00:02:13.225331  451264 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:02:13.225445  451264 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/config.json ...
	I0109 00:02:13.225643  451264 mustload.go:65] Loading cluster: no-preload-378213
	I0109 00:02:13.225761  451264 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:02:13.225785  451264 stop.go:39] StopHost: no-preload-378213
	I0109 00:02:13.226340  451264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:02:13.226420  451264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:02:13.241868  451264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43959
	I0109 00:02:13.242333  451264 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:02:13.242995  451264 main.go:141] libmachine: Using API Version  1
	I0109 00:02:13.243028  451264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:02:13.243484  451264 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:02:13.246162  451264 out.go:177] * Stopping node "no-preload-378213"  ...
	I0109 00:02:13.247841  451264 main.go:141] libmachine: Stopping "no-preload-378213"...
	I0109 00:02:13.247869  451264 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:02:13.250074  451264 main.go:141] libmachine: (no-preload-378213) Calling .Stop
	I0109 00:02:13.253934  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 0/60
	I0109 00:02:14.255391  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 1/60
	I0109 00:02:15.256955  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 2/60
	I0109 00:02:16.258571  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 3/60
	I0109 00:02:17.260115  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 4/60
	I0109 00:02:18.262206  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 5/60
	I0109 00:02:19.263813  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 6/60
	I0109 00:02:20.265932  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 7/60
	I0109 00:02:21.267382  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 8/60
	I0109 00:02:22.268944  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 9/60
	I0109 00:02:23.270498  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 10/60
	I0109 00:02:24.271929  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 11/60
	I0109 00:02:25.274024  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 12/60
	I0109 00:02:26.275596  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 13/60
	I0109 00:02:27.277022  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 14/60
	I0109 00:02:28.279101  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 15/60
	I0109 00:02:29.281128  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 16/60
	I0109 00:02:30.282421  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 17/60
	I0109 00:02:31.283946  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 18/60
	I0109 00:02:32.285429  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 19/60
	I0109 00:02:33.287581  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 20/60
	I0109 00:02:34.289792  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 21/60
	I0109 00:02:35.291155  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 22/60
	I0109 00:02:36.292723  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 23/60
	I0109 00:02:37.294182  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 24/60
	I0109 00:02:38.296099  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 25/60
	I0109 00:02:39.298008  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 26/60
	I0109 00:02:40.299431  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 27/60
	I0109 00:02:41.300924  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 28/60
	I0109 00:02:42.302301  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 29/60
	I0109 00:02:43.304793  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 30/60
	I0109 00:02:44.306482  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 31/60
	I0109 00:02:45.308128  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 32/60
	I0109 00:02:46.309985  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 33/60
	I0109 00:02:47.311586  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 34/60
	I0109 00:02:48.313809  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 35/60
	I0109 00:02:49.315516  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 36/60
	I0109 00:02:50.317131  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 37/60
	I0109 00:02:51.318671  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 38/60
	I0109 00:02:52.320421  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 39/60
	I0109 00:02:53.322653  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 40/60
	I0109 00:02:54.324280  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 41/60
	I0109 00:02:55.325920  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 42/60
	I0109 00:02:56.327479  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 43/60
	I0109 00:02:57.329198  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 44/60
	I0109 00:02:58.331160  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 45/60
	I0109 00:02:59.332725  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 46/60
	I0109 00:03:00.334150  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 47/60
	I0109 00:03:01.335619  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 48/60
	I0109 00:03:02.337119  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 49/60
	I0109 00:03:03.339063  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 50/60
	I0109 00:03:04.340569  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 51/60
	I0109 00:03:05.341881  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 52/60
	I0109 00:03:06.343386  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 53/60
	I0109 00:03:07.344956  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 54/60
	I0109 00:03:08.347053  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 55/60
	I0109 00:03:09.348607  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 56/60
	I0109 00:03:10.350329  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 57/60
	I0109 00:03:11.351687  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 58/60
	I0109 00:03:12.353134  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 59/60
	I0109 00:03:13.354477  451264 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:03:13.354531  451264 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:13.354554  451264 retry.go:31] will retry after 709.946674ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:14.065540  451264 stop.go:39] StopHost: no-preload-378213
	I0109 00:03:14.065970  451264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:03:14.066026  451264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:03:14.080649  451264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0109 00:03:14.081099  451264 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:03:14.081577  451264 main.go:141] libmachine: Using API Version  1
	I0109 00:03:14.081599  451264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:03:14.082014  451264 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:03:14.084360  451264 out.go:177] * Stopping node "no-preload-378213"  ...
	I0109 00:03:14.085764  451264 main.go:141] libmachine: Stopping "no-preload-378213"...
	I0109 00:03:14.085781  451264 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:03:14.087397  451264 main.go:141] libmachine: (no-preload-378213) Calling .Stop
	I0109 00:03:14.090843  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 0/60
	I0109 00:03:15.092261  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 1/60
	I0109 00:03:16.093825  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 2/60
	I0109 00:03:17.095241  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 3/60
	I0109 00:03:18.096715  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 4/60
	I0109 00:03:19.098486  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 5/60
	I0109 00:03:20.100064  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 6/60
	I0109 00:03:21.101490  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 7/60
	I0109 00:03:22.102978  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 8/60
	I0109 00:03:23.104502  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 9/60
	I0109 00:03:24.106810  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 10/60
	I0109 00:03:25.108237  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 11/60
	I0109 00:03:26.109723  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 12/60
	I0109 00:03:27.111076  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 13/60
	I0109 00:03:28.112462  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 14/60
	I0109 00:03:29.113987  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 15/60
	I0109 00:03:30.115526  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 16/60
	I0109 00:03:31.116932  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 17/60
	I0109 00:03:32.118445  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 18/60
	I0109 00:03:33.120109  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 19/60
	I0109 00:03:34.122019  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 20/60
	I0109 00:03:35.123541  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 21/60
	I0109 00:03:36.124994  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 22/60
	I0109 00:03:37.126502  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 23/60
	I0109 00:03:38.127977  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 24/60
	I0109 00:03:39.130448  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 25/60
	I0109 00:03:40.131991  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 26/60
	I0109 00:03:41.133581  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 27/60
	I0109 00:03:42.135063  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 28/60
	I0109 00:03:43.136413  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 29/60
	I0109 00:03:44.138481  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 30/60
	I0109 00:03:45.139830  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 31/60
	I0109 00:03:46.141266  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 32/60
	I0109 00:03:47.142580  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 33/60
	I0109 00:03:48.144310  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 34/60
	I0109 00:03:49.145961  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 35/60
	I0109 00:03:50.148000  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 36/60
	I0109 00:03:51.149958  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 37/60
	I0109 00:03:52.151421  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 38/60
	I0109 00:03:53.152842  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 39/60
	I0109 00:03:54.154504  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 40/60
	I0109 00:03:55.155906  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 41/60
	I0109 00:03:56.157891  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 42/60
	I0109 00:03:57.159343  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 43/60
	I0109 00:03:58.160877  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 44/60
	I0109 00:03:59.162564  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 45/60
	I0109 00:04:00.164104  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 46/60
	I0109 00:04:01.165760  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 47/60
	I0109 00:04:02.167394  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 48/60
	I0109 00:04:03.168945  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 49/60
	I0109 00:04:04.170871  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 50/60
	I0109 00:04:05.172435  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 51/60
	I0109 00:04:06.173862  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 52/60
	I0109 00:04:07.175226  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 53/60
	I0109 00:04:08.176597  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 54/60
	I0109 00:04:09.177735  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 55/60
	I0109 00:04:10.179015  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 56/60
	I0109 00:04:11.180521  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 57/60
	I0109 00:04:12.181886  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 58/60
	I0109 00:04:13.183432  451264 main.go:141] libmachine: (no-preload-378213) Waiting for machine to stop 59/60
	I0109 00:04:14.184212  451264 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:04:14.184267  451264 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:04:14.186337  451264 out.go:177] 
	W0109 00:04:14.187890  451264 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0109 00:04:14.187903  451264 out.go:239] * 
	* 
	W0109 00:04:14.191012  451264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:04:14.192543  451264 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-378213 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
E0109 00:04:16.975678  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:04:19.628045  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:04:21.608679  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:04:22.326979  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.332254  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.342615  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.362897  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.403215  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.483611  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.644095  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:22.965200  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:23.605477  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:24.886377  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:27.447098  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:04:32.567327  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213: exit status 3 (18.549853288s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:04:32.743780  452018 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host
	E0109 00:04:32.743798  452018 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-378213" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.58s)
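Editor's note: as in the embed-certs case, the post-mortem step above runs out/minikube-linux-amd64 status --format={{.Host}} and tolerates exit status 3, since an unreachable host ("no route to host" over SSH) is expected after a failed stop; the helper only skips log retrieval. Below is a small, hypothetical Go sketch of that tolerance using only the standard library; postMortemStatus is an invented name and this is not the helpers_test.go code.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// postMortemStatus runs `minikube status --format={{.Host}}` for a profile.
	// Exit status 3 means the host could not be reached over SSH; the test
	// helper in the log treats that as "may be ok" and merely skips log
	// retrieval, so it is reported here as a non-fatal outcome.
	func postMortemStatus(minikubeBin, profile string) (string, error) {
		cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 3 {
			return string(out), nil // host unreachable; tolerated after a failed stop
		}
		return string(out), err
	}

	func main() {
		state, err := postMortemStatus("out/minikube-linux-amd64", "no-preload-378213")
		fmt.Printf("host state %q, err: %v\n", state, err)
	}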

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-834116 --alsologtostderr -v=3
E0109 00:02:59.687755  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:03:01.183778  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:03:36.012728  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.018011  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.028272  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.048567  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.088901  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.169338  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.330195  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:36.650857  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:37.291887  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:03:38.572748  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-834116 --alsologtostderr -v=3: exit status 82 (2m1.717284273s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-834116"  ...
	* Stopping node "default-k8s-diff-port-834116"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:02:55.128835  451490 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:02:55.129199  451490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:55.129213  451490 out.go:309] Setting ErrFile to fd 2...
	I0109 00:02:55.129220  451490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:02:55.129540  451490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:02:55.129844  451490 out.go:303] Setting JSON to false
	I0109 00:02:55.129977  451490 mustload.go:65] Loading cluster: default-k8s-diff-port-834116
	I0109 00:02:55.130484  451490 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:02:55.130618  451490 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:02:55.130818  451490 mustload.go:65] Loading cluster: default-k8s-diff-port-834116
	I0109 00:02:55.130975  451490 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:02:55.131031  451490 stop.go:39] StopHost: default-k8s-diff-port-834116
	I0109 00:02:55.131556  451490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:02:55.131631  451490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:02:55.146767  451490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0109 00:02:55.147266  451490 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:02:55.147850  451490 main.go:141] libmachine: Using API Version  1
	I0109 00:02:55.147876  451490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:02:55.148304  451490 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:02:55.150994  451490 out.go:177] * Stopping node "default-k8s-diff-port-834116"  ...
	I0109 00:02:55.152736  451490 main.go:141] libmachine: Stopping "default-k8s-diff-port-834116"...
	I0109 00:02:55.152764  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:02:55.154561  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Stop
	I0109 00:02:55.158850  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 0/60
	I0109 00:02:56.160372  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 1/60
	I0109 00:02:57.161858  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 2/60
	I0109 00:02:58.164027  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 3/60
	I0109 00:02:59.165644  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 4/60
	I0109 00:03:00.167967  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 5/60
	I0109 00:03:01.169476  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 6/60
	I0109 00:03:02.170988  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 7/60
	I0109 00:03:03.172644  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 8/60
	I0109 00:03:04.174109  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 9/60
	I0109 00:03:05.176345  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 10/60
	I0109 00:03:06.178116  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 11/60
	I0109 00:03:07.179611  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 12/60
	I0109 00:03:08.181187  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 13/60
	I0109 00:03:09.182514  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 14/60
	I0109 00:03:10.184713  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 15/60
	I0109 00:03:11.186224  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 16/60
	I0109 00:03:12.187787  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 17/60
	I0109 00:03:13.190079  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 18/60
	I0109 00:03:14.191558  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 19/60
	I0109 00:03:15.193878  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 20/60
	I0109 00:03:16.195197  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 21/60
	I0109 00:03:17.196704  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 22/60
	I0109 00:03:18.198181  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 23/60
	I0109 00:03:19.199816  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 24/60
	I0109 00:03:20.202053  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 25/60
	I0109 00:03:21.203339  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 26/60
	I0109 00:03:22.204845  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 27/60
	I0109 00:03:23.206332  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 28/60
	I0109 00:03:24.207892  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 29/60
	I0109 00:03:25.210272  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 30/60
	I0109 00:03:26.211962  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 31/60
	I0109 00:03:27.213404  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 32/60
	I0109 00:03:28.214999  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 33/60
	I0109 00:03:29.216476  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 34/60
	I0109 00:03:30.218542  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 35/60
	I0109 00:03:31.219908  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 36/60
	I0109 00:03:32.221442  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 37/60
	I0109 00:03:33.222857  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 38/60
	I0109 00:03:34.224452  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 39/60
	I0109 00:03:35.227104  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 40/60
	I0109 00:03:36.228781  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 41/60
	I0109 00:03:37.230519  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 42/60
	I0109 00:03:38.231862  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 43/60
	I0109 00:03:39.233518  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 44/60
	I0109 00:03:40.235473  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 45/60
	I0109 00:03:41.236887  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 46/60
	I0109 00:03:42.238301  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 47/60
	I0109 00:03:43.239898  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 48/60
	I0109 00:03:44.241463  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 49/60
	I0109 00:03:45.244136  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 50/60
	I0109 00:03:46.245499  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 51/60
	I0109 00:03:47.247000  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 52/60
	I0109 00:03:48.248359  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 53/60
	I0109 00:03:49.250302  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 54/60
	I0109 00:03:50.252522  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 55/60
	I0109 00:03:51.253965  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 56/60
	I0109 00:03:52.255469  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 57/60
	I0109 00:03:53.256833  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 58/60
	I0109 00:03:54.258374  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 59/60
	I0109 00:03:55.259832  451490 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:03:55.259938  451490 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:55.259968  451490 retry.go:31] will retry after 1.389705815s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:03:56.650524  451490 stop.go:39] StopHost: default-k8s-diff-port-834116
	I0109 00:03:56.650916  451490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:03:56.650965  451490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:03:56.666723  451490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0109 00:03:56.667182  451490 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:03:56.667752  451490 main.go:141] libmachine: Using API Version  1
	I0109 00:03:56.667789  451490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:03:56.668181  451490 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:03:56.670425  451490 out.go:177] * Stopping node "default-k8s-diff-port-834116"  ...
	I0109 00:03:56.671757  451490 main.go:141] libmachine: Stopping "default-k8s-diff-port-834116"...
	I0109 00:03:56.671784  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:03:56.673490  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Stop
	I0109 00:03:56.677014  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 0/60
	I0109 00:03:57.678537  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 1/60
	I0109 00:03:58.680131  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 2/60
	I0109 00:03:59.681855  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 3/60
	I0109 00:04:00.683581  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 4/60
	I0109 00:04:01.685403  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 5/60
	I0109 00:04:02.686675  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 6/60
	I0109 00:04:03.688343  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 7/60
	I0109 00:04:04.689969  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 8/60
	I0109 00:04:05.691424  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 9/60
	I0109 00:04:06.693903  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 10/60
	I0109 00:04:07.695573  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 11/60
	I0109 00:04:08.697155  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 12/60
	I0109 00:04:09.698711  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 13/60
	I0109 00:04:10.700201  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 14/60
	I0109 00:04:11.702330  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 15/60
	I0109 00:04:12.704398  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 16/60
	I0109 00:04:13.705760  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 17/60
	I0109 00:04:14.707505  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 18/60
	I0109 00:04:15.708958  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 19/60
	I0109 00:04:16.710809  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 20/60
	I0109 00:04:17.712199  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 21/60
	I0109 00:04:18.713482  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 22/60
	I0109 00:04:19.714903  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 23/60
	I0109 00:04:20.716333  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 24/60
	I0109 00:04:21.718114  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 25/60
	I0109 00:04:22.719624  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 26/60
	I0109 00:04:23.720984  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 27/60
	I0109 00:04:24.722605  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 28/60
	I0109 00:04:25.723935  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 29/60
	I0109 00:04:26.725972  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 30/60
	I0109 00:04:27.727475  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 31/60
	I0109 00:04:28.728949  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 32/60
	I0109 00:04:29.730336  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 33/60
	I0109 00:04:30.731827  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 34/60
	I0109 00:04:31.734090  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 35/60
	I0109 00:04:32.735475  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 36/60
	I0109 00:04:33.736912  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 37/60
	I0109 00:04:34.738520  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 38/60
	I0109 00:04:35.739920  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 39/60
	I0109 00:04:36.741804  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 40/60
	I0109 00:04:37.743177  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 41/60
	I0109 00:04:38.744542  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 42/60
	I0109 00:04:39.746089  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 43/60
	I0109 00:04:40.747837  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 44/60
	I0109 00:04:41.749864  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 45/60
	I0109 00:04:42.751543  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 46/60
	I0109 00:04:43.753347  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 47/60
	I0109 00:04:44.755202  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 48/60
	I0109 00:04:45.756891  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 49/60
	I0109 00:04:46.758760  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 50/60
	I0109 00:04:47.760286  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 51/60
	I0109 00:04:48.761871  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 52/60
	I0109 00:04:49.763463  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 53/60
	I0109 00:04:50.764942  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 54/60
	I0109 00:04:51.766861  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 55/60
	I0109 00:04:52.768323  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 56/60
	I0109 00:04:53.769937  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 57/60
	I0109 00:04:54.771704  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 58/60
	I0109 00:04:55.773540  451490 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for machine to stop 59/60
	I0109 00:04:56.774711  451490 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0109 00:04:56.774771  451490 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0109 00:04:56.776868  451490 out.go:177] 
	W0109 00:04:56.778215  451490 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0109 00:04:56.778231  451490 out.go:239] * 
	* 
	W0109 00:04:56.781465  451490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:04:56.783060  451490 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-834116 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
E0109 00:04:57.936256  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:05:00.597516  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:05:03.288565  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:05:04.065228  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116: exit status 3 (18.450432963s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:05:15.235718  452294 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E0109 00:05:15.235746  452294 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-834116" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.17s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293: exit status 3 (3.200048583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:01.635840  451753 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host
	E0109 00:04:01.635865  451753 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-003293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0109 00:04:02.675494  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-003293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154523807s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-003293 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293: exit status 3 (3.061528881s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:10.851792  451871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host
	E0109 00:04:10.851818  451871 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.81:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-003293" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373: exit status 3 (3.199808442s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:02.915762  451783 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host
	E0109 00:04:02.915787  451783 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154192527s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373: exit status 3 (3.061641146s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:12.131868  451901 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host
	E0109 00:04:12.131897  451901 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.132:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-845373" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213: exit status 3 (3.195607184s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:35.939828  452095 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host
	E0109 00:04:35.939856  452095 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-378213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0109 00:04:40.116673  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.121939  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.132274  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.152573  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.192885  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.273319  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.433801  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:40.754122  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:41.395184  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-378213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154969077s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-378213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
E0109 00:04:42.675916  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:42.807871  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213: exit status 3 (3.060973736s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:04:45.155764  452182 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host
	E0109 00:04:45.155798  452182 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.62:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-378213" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116: exit status 3 (3.199673083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:05:18.435787  452384 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E0109 00:05:18.435809  452384 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-834116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0109 00:05:21.078088  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-834116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155367299s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-834116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116: exit status 3 (3.060348989s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0109 00:05:27.651891  452447 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E0109 00:05:27.651928  452447 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-834116" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:15:28.030160  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:15:49.610449  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:16:13.677249  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0109 00:16:28.295102  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:16:37.766278  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:23:44.293727774 +0000 UTC m=+5523.170677993
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-834116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-834116 logs -n 25: (1.806501946s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo cat                              | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:05:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:05:27.711531  452488 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:05:27.711728  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711742  452488 out.go:309] Setting ErrFile to fd 2...
	I0109 00:05:27.711750  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711982  452488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:05:27.712562  452488 out.go:303] Setting JSON to false
	I0109 00:05:27.713635  452488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17254,"bootTime":1704741474,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:05:27.713709  452488 start.go:138] virtualization: kvm guest
	I0109 00:05:27.716110  452488 out.go:177] * [default-k8s-diff-port-834116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:05:27.718021  452488 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:05:27.719311  452488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:05:27.718049  452488 notify.go:220] Checking for updates...
	I0109 00:05:27.720754  452488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:05:27.722073  452488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:05:27.723496  452488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:05:27.724923  452488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:05:27.726663  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:05:27.727158  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.727261  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.741812  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0109 00:05:27.742300  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.742911  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.742943  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.743249  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.743438  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.743694  452488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:05:27.743987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.744027  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.758231  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0109 00:05:27.758620  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.759039  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.759069  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.759349  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.759570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.797687  452488 out.go:177] * Using the kvm2 driver based on existing profile
	I0109 00:05:27.799282  452488 start.go:298] selected driver: kvm2
	I0109 00:05:27.799301  452488 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.799485  452488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:05:27.800156  452488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.800240  452488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:05:27.815851  452488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:05:27.816303  452488 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:05:27.816371  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:05:27.816384  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:05:27.816406  452488 start_flags.go:323] config:
	{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.816592  452488 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.818643  452488 out.go:177] * Starting control plane node default-k8s-diff-port-834116 in cluster default-k8s-diff-port-834116
	I0109 00:05:30.179677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:27.820207  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:05:27.820246  452488 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0109 00:05:27.820258  452488 cache.go:56] Caching tarball of preloaded images
	I0109 00:05:27.820344  452488 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:05:27.820354  452488 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:05:27.820455  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:05:27.820632  452488 start.go:365] acquiring machines lock for default-k8s-diff-port-834116: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
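The "acquiring machines lock" entry above (and the later "acquired machines lock ... in 4m36.156097213s" lines) show start operations being serialized behind a named lock with a 500ms polling delay and a 13m timeout. Purely as an in-process sketch of that wait-with-delay-and-timeout pattern (the real lock is shared across minikube processes, which this is not):

package main

import (
	"fmt"
	"sync"
	"time"
)

// acquireWithTimeout polls TryLock every delay until it succeeds or the timeout
// expires, roughly mirroring the Delay:500ms Timeout:13m0s lock parameters in the log.
func acquireWithTimeout(mu *sync.Mutex, delay, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for {
		if mu.TryLock() {
			return time.Since(start), nil
		}
		if time.Since(start) > timeout {
			return 0, fmt.Errorf("timed out after %v waiting for lock", timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	mu.Lock()
	go func() {
		time.Sleep(2 * time.Second)
		mu.Unlock() // simulate another profile releasing the machines lock
	}()
	held, err := acquireWithTimeout(&mu, 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Printf("acquired machines lock in %v\n", held)
}
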
	I0109 00:05:33.251703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:39.331707  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:42.403645  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:48.483635  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:51.555692  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:57.635653  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:00.707722  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:06.787696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:09.859664  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:15.939733  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:19.011687  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:25.091759  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:28.163666  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:34.243673  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:37.315693  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:43.395652  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:46.467622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:52.547639  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:55.619655  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:01.699734  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:04.771686  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:10.851703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:13.923711  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:20.003883  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:23.075726  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:29.155735  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:32.227698  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:38.307696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:41.379724  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:47.459727  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:50.531708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:56.611621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:59.683677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:05.763622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:08.835708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:14.915674  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:17.987706  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:24.067730  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:27.139621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:33.219667  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:36.291651  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:42.371678  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:45.443660  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
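The long run of "no route to host" errors above is libmachine repeatedly dialing the guest's SSH port until the VM answers (it never does here, which is why provisioning later fails). A minimal sketch of that dial-until-reachable loop, with an illustrative address, interval, and deadline rather than minikube's actual values:

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// waitForTCP keeps dialing addr until a connection succeeds or the deadline passes.
// Errors such as "connect: no route to host" are logged and retried, mirroring the
// repeated entries in the log above.
func waitForTCP(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		log.Printf("Error dialing TCP: %v", err)
		if time.Now().After(stop) {
			return fmt.Errorf("gave up waiting for %s: %w", addr, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// 192.168.72.81:22 is the guest SSH endpoint seen in the log; interval and deadline are assumptions.
	if err := waitForTCP("192.168.72.81:22", 3*time.Second, 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}
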
	I0109 00:08:48.448024  451984 start.go:369] acquired machines lock for "embed-certs-845373" in 4m36.156097213s
	I0109 00:08:48.448197  451984 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:08:48.448239  451984 fix.go:54] fixHost starting: 
	I0109 00:08:48.448769  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:08:48.448810  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:08:48.464359  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0109 00:08:48.465014  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:08:48.465634  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:08:48.465669  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:08:48.466022  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:08:48.466241  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:08:48.466431  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:08:48.468132  451984 fix.go:102] recreateIfNeeded on embed-certs-845373: state=Stopped err=<nil>
	I0109 00:08:48.468162  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	W0109 00:08:48.468339  451984 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:08:48.470346  451984 out.go:177] * Restarting existing kvm2 VM for "embed-certs-845373" ...
	I0109 00:08:48.445374  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:08:48.445415  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:08:48.447757  451943 machine.go:91] provisioned docker machine in 4m37.407825673s
	I0109 00:08:48.447823  451943 fix.go:56] fixHost completed within 4m37.428599196s
	I0109 00:08:48.447831  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 4m37.428619873s
	W0109 00:08:48.447876  451943 start.go:694] error starting host: provision: host is not running
	W0109 00:08:48.448289  451943 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0109 00:08:48.448305  451943 start.go:709] Will try again in 5 seconds ...
	I0109 00:08:48.471819  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Start
	I0109 00:08:48.471966  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring networks are active...
	I0109 00:08:48.472753  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network default is active
	I0109 00:08:48.473111  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network mk-embed-certs-845373 is active
	I0109 00:08:48.473441  451984 main.go:141] libmachine: (embed-certs-845373) Getting domain xml...
	I0109 00:08:48.474114  451984 main.go:141] libmachine: (embed-certs-845373) Creating domain...
	I0109 00:08:49.716628  451984 main.go:141] libmachine: (embed-certs-845373) Waiting to get IP...
	I0109 00:08:49.717606  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.718022  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.718080  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.717994  452995 retry.go:31] will retry after 247.787821ms: waiting for machine to come up
	I0109 00:08:49.967655  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.968169  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.968203  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.968101  452995 retry.go:31] will retry after 339.65094ms: waiting for machine to come up
	I0109 00:08:50.309542  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.310008  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.310041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.309944  452995 retry.go:31] will retry after 475.654088ms: waiting for machine to come up
	I0109 00:08:50.787560  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.787930  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.787973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.787876  452995 retry.go:31] will retry after 437.198744ms: waiting for machine to come up
	I0109 00:08:51.226414  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.226866  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.226901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.226817  452995 retry.go:31] will retry after 501.606265ms: waiting for machine to come up
	I0109 00:08:51.730571  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.731041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.731084  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.730949  452995 retry.go:31] will retry after 707.547375ms: waiting for machine to come up
	I0109 00:08:53.450389  451943 start.go:365] acquiring machines lock for old-k8s-version-003293: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:08:52.440038  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:52.440373  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:52.440434  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:52.440330  452995 retry.go:31] will retry after 1.02016439s: waiting for machine to come up
	I0109 00:08:53.462628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:53.463090  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:53.463120  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:53.463037  452995 retry.go:31] will retry after 1.322196175s: waiting for machine to come up
	I0109 00:08:54.786979  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:54.787514  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:54.787540  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:54.787465  452995 retry.go:31] will retry after 1.260135214s: waiting for machine to come up
	I0109 00:08:56.049973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:56.050450  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:56.050478  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:56.050415  452995 retry.go:31] will retry after 1.476819521s: waiting for machine to come up
	I0109 00:08:57.529060  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:57.529497  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:57.529527  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:57.529444  452995 retry.go:31] will retry after 2.830903204s: waiting for machine to come up
	I0109 00:09:00.362901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:00.363333  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:00.363372  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:00.363292  452995 retry.go:31] will retry after 3.093040214s: waiting for machine to come up
	I0109 00:09:03.460541  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:03.461066  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:03.461103  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:03.461032  452995 retry.go:31] will retry after 3.190401984s: waiting for machine to come up
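The "will retry after ...: waiting for machine to come up" lines come from a generic retry helper that sleeps a growing, jittered delay between attempts while polling for the guest's IP. As a rough sketch only (the backoff growth and jitter below are assumptions, not the values retry.go uses):

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// growing, jittered delay between tries and logging each wait, similar to the
// "will retry after ..." lines in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		log.Printf("will retry after %v: %v", wait, err)
		time.Sleep(wait)
		delay *= 2 // grow the base delay each round
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine is up after", calls, "attempts")
}
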
	I0109 00:09:06.654729  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655295  451984 main.go:141] libmachine: (embed-certs-845373) Found IP for machine: 192.168.50.132
	I0109 00:09:06.655331  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has current primary IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655343  451984 main.go:141] libmachine: (embed-certs-845373) Reserving static IP address...
	I0109 00:09:06.655828  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.655851  451984 main.go:141] libmachine: (embed-certs-845373) DBG | skip adding static IP to network mk-embed-certs-845373 - found existing host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"}
	I0109 00:09:06.655865  451984 main.go:141] libmachine: (embed-certs-845373) Reserved static IP address: 192.168.50.132
	I0109 00:09:06.655880  451984 main.go:141] libmachine: (embed-certs-845373) Waiting for SSH to be available...
	I0109 00:09:06.655969  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Getting to WaitForSSH function...
	I0109 00:09:06.658083  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658468  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.658501  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658615  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH client type: external
	I0109 00:09:06.658650  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa (-rw-------)
	I0109 00:09:06.658704  451984 main.go:141] libmachine: (embed-certs-845373) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:06.658725  451984 main.go:141] libmachine: (embed-certs-845373) DBG | About to run SSH command:
	I0109 00:09:06.658741  451984 main.go:141] libmachine: (embed-certs-845373) DBG | exit 0
	I0109 00:09:06.751337  451984 main.go:141] libmachine: (embed-certs-845373) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:06.751683  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetConfigRaw
	I0109 00:09:06.752338  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:06.754749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755133  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.755161  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755475  451984 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/config.json ...
	I0109 00:09:06.755689  451984 machine.go:88] provisioning docker machine ...
	I0109 00:09:06.755710  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:06.755939  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756108  451984 buildroot.go:166] provisioning hostname "embed-certs-845373"
	I0109 00:09:06.756133  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756287  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.758391  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758651  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.758678  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758821  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.759026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759151  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759276  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.759419  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.759891  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.759906  451984 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-845373 && echo "embed-certs-845373" | sudo tee /etc/hostname
	I0109 00:09:06.897829  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845373
	
	I0109 00:09:06.897862  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.900776  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901151  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.901194  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901354  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.901601  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901767  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.902093  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.902429  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.902457  451984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-845373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845373/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-845373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:07.035051  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:07.035088  451984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:07.035106  451984 buildroot.go:174] setting up certificates
	I0109 00:09:07.035141  451984 provision.go:83] configureAuth start
	I0109 00:09:07.035150  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:07.035470  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.038830  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039185  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.039216  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039473  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.041628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.041978  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.042006  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.042138  451984 provision.go:138] copyHostCerts
	I0109 00:09:07.042215  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:07.042235  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:07.042301  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:07.042386  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:07.042394  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:07.042420  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:07.042547  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:07.042557  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:07.042582  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:07.042658  451984 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845373 san=[192.168.50.132 192.168.50.132 localhost 127.0.0.1 minikube embed-certs-845373]
	I0109 00:09:07.146928  451984 provision.go:172] copyRemoteCerts
	I0109 00:09:07.147000  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:07.147026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.149665  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.149999  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.150025  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.150190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.150402  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.150624  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.150778  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.912619  452237 start.go:369] acquired machines lock for "no-preload-378213" in 4m22.586847609s
	I0109 00:09:07.912695  452237 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:07.912705  452237 fix.go:54] fixHost starting: 
	I0109 00:09:07.913160  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:07.913205  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:07.929558  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0109 00:09:07.930071  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:07.930620  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:09:07.930646  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:07.931015  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:07.931232  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:07.931421  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:09:07.933075  452237 fix.go:102] recreateIfNeeded on no-preload-378213: state=Stopped err=<nil>
	I0109 00:09:07.933114  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	W0109 00:09:07.933281  452237 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:07.935418  452237 out.go:177] * Restarting existing kvm2 VM for "no-preload-378213" ...
	I0109 00:09:07.246432  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:07.270463  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0109 00:09:07.294094  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:09:07.317414  451984 provision.go:86] duration metric: configureAuth took 282.256583ms
	I0109 00:09:07.317462  451984 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:07.317651  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:07.317743  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.320246  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320529  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.320557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320724  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.320930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321068  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321199  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.321480  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.321807  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.321831  451984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:07.649960  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:07.649991  451984 machine.go:91] provisioned docker machine in 894.285072ms
	I0109 00:09:07.650005  451984 start.go:300] post-start starting for "embed-certs-845373" (driver="kvm2")
	I0109 00:09:07.650020  451984 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:07.650052  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.650505  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:07.650537  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.653343  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653671  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.653695  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653913  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.654147  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.654345  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.654548  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.745211  451984 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:07.749547  451984 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:07.749608  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:07.749694  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:07.749790  451984 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:07.749906  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:07.758232  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:07.781504  451984 start.go:303] post-start completed in 131.476813ms
	I0109 00:09:07.781532  451984 fix.go:56] fixHost completed within 19.333293059s
	I0109 00:09:07.781556  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.784365  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.784751  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.784774  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.785021  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.785267  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785430  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785570  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.785745  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.786073  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.786085  451984 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:07.912423  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758947.859859847
	
	I0109 00:09:07.912452  451984 fix.go:206] guest clock: 1704758947.859859847
	I0109 00:09:07.912462  451984 fix.go:219] Guest: 2024-01-09 00:09:07.859859847 +0000 UTC Remote: 2024-01-09 00:09:07.781536446 +0000 UTC m=+295.641408793 (delta=78.323401ms)
	I0109 00:09:07.912487  451984 fix.go:190] guest clock delta is within tolerance: 78.323401ms
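The clock check above runs `date +%s.%N` on the guest and compares the result with the host's view of the remote time, confirming the 78.3ms skew is acceptable. A small sketch of that comparison, with the tolerance chosen for illustration rather than taken from minikube:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest clock value taken from the log above; normally it comes from an SSH command.
	guest, err := parseGuestClock("1704758947.859859847")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 1, 9, 0, 9, 7, 781536446, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's setting
	fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", delta, delta < tolerance && delta > -tolerance)
}
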
	I0109 00:09:07.912494  451984 start.go:83] releasing machines lock for "embed-certs-845373", held for 19.464424699s
	I0109 00:09:07.912529  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.912827  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.915749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916146  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.916177  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916358  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.916865  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917042  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917155  451984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:07.917208  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.917263  451984 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:07.917288  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.920121  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920158  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920573  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920608  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920626  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920648  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920703  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920858  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920942  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921034  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921122  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921185  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921263  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.921282  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:08.040953  451984 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:08.046882  451984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:08.204801  451984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:08.214653  451984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:08.214741  451984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:08.232714  451984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:08.232750  451984 start.go:475] detecting cgroup driver to use...
	I0109 00:09:08.232881  451984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:08.254408  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:08.266926  451984 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:08.267015  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:08.278971  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:08.291982  451984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:08.395029  451984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:08.514444  451984 docker.go:219] disabling docker service ...
	I0109 00:09:08.514527  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:08.528548  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:08.540899  451984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:08.669118  451984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:08.776487  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:08.791617  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:08.809437  451984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:08.809509  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.818817  451984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:08.818891  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.828374  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.839820  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.849449  451984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:08.858899  451984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:08.869295  451984 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:08.869377  451984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:08.885387  451984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:08.895106  451984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:09.007897  451984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:09.197656  451984 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:09.197737  451984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:09.203174  451984 start.go:543] Will wait 60s for crictl version
	I0109 00:09:09.203264  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:09:09.207312  451984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:09.245917  451984 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:09.245996  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.296410  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.345334  451984 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0109 00:09:07.937023  452237 main.go:141] libmachine: (no-preload-378213) Calling .Start
	I0109 00:09:07.937229  452237 main.go:141] libmachine: (no-preload-378213) Ensuring networks are active...
	I0109 00:09:07.938093  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network default is active
	I0109 00:09:07.938504  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network mk-no-preload-378213 is active
	I0109 00:09:07.938868  452237 main.go:141] libmachine: (no-preload-378213) Getting domain xml...
	I0109 00:09:07.939609  452237 main.go:141] libmachine: (no-preload-378213) Creating domain...
	I0109 00:09:09.254019  452237 main.go:141] libmachine: (no-preload-378213) Waiting to get IP...
	I0109 00:09:09.254967  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.255375  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.255465  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.255333  453115 retry.go:31] will retry after 260.636384ms: waiting for machine to come up
	I0109 00:09:09.518054  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.518563  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.518590  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.518522  453115 retry.go:31] will retry after 320.770806ms: waiting for machine to come up
	I0109 00:09:09.841203  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.841675  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.841710  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.841604  453115 retry.go:31] will retry after 317.226014ms: waiting for machine to come up
	I0109 00:09:10.160137  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.160545  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.160576  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.160522  453115 retry.go:31] will retry after 452.723717ms: waiting for machine to come up
	I0109 00:09:09.346886  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:09.350050  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350407  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:09.350440  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350626  451984 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:09.354884  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
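
The host.minikube.internal mapping is refreshed with a strip-then-append rewrite of /etc/hosts, staged in a temp file and installed with a single sudo cp so repeated starts never accumulate duplicate entries. The same pattern, using the gateway IP from the log:

    # drop any stale entry, append the current mapping, install atomically
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
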
	I0109 00:09:09.367669  451984 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:09.367765  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:09.407793  451984 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:09.407876  451984 ssh_runner.go:195] Run: which lz4
	I0109 00:09:09.412172  451984 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:09:09.416303  451984 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:09.416331  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:11.408967  451984 crio.go:444] Took 1.996823 seconds to copy over tarball
	I0109 00:09:11.409067  451984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:10.615452  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.615971  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.615999  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.615922  453115 retry.go:31] will retry after 555.714359ms: waiting for machine to come up
	I0109 00:09:11.173767  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:11.174269  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:11.174301  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:11.174220  453115 retry.go:31] will retry after 843.630815ms: waiting for machine to come up
	I0109 00:09:12.019354  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:12.019896  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:12.019962  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:12.019884  453115 retry.go:31] will retry after 1.083324701s: waiting for machine to come up
	I0109 00:09:13.104954  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:13.105499  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:13.105535  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:13.105442  453115 retry.go:31] will retry after 1.445208328s: waiting for machine to come up
	I0109 00:09:14.552723  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:14.553247  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:14.553278  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:14.553202  453115 retry.go:31] will retry after 1.207345182s: waiting for machine to come up
	I0109 00:09:14.301519  451984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892406004s)
	I0109 00:09:14.301567  451984 crio.go:451] Took 2.892564 seconds to extract the tarball
	I0109 00:09:14.301579  451984 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:09:14.344103  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:14.399048  451984 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:09:14.399072  451984 cache_images.go:84] Images are preloaded, skipping loading
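
Because /preloaded.tar.lz4 was absent on the node, the 458 MB preload tarball was copied over SSH and unpacked into /var, after which crictl confirms every image needed for v1.28.4 is already present. The unpack and verification commands from the log, shown standalone (lz4 is available inside the minikube guest):

    # extract the preloaded image store into /var, preserving xattrs (security.capability)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4          # removed once extracted, as above
    sudo crictl images --output json    # registry.k8s.io/kube-apiserver:v1.28.4 etc. should now be listed
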
	I0109 00:09:14.399160  451984 ssh_runner.go:195] Run: crio config
	I0109 00:09:14.459603  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:14.459643  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:14.459693  451984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:14.459752  451984 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845373 NodeName:embed-certs-845373 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:14.460006  451984 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-845373"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:14.460098  451984 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-845373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:14.460176  451984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:09:14.469269  451984 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:14.469363  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:14.479156  451984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0109 00:09:14.496058  451984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:09:14.513299  451984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
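
At this point the rendered configuration has been staged onto the node as three files whose sizes match the scp lines above; they can be inspected in the guest before any kubeadm phase runs:

    # kubelet drop-in (378 bytes), base kubelet unit (352 bytes), rendered kubeadm config (2105 bytes)
    sudo ls -la /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
                /lib/systemd/system/kubelet.service \
                /var/tmp/minikube/kubeadm.yaml.new
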
	I0109 00:09:14.530721  451984 ssh_runner.go:195] Run: grep 192.168.50.132	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:14.534849  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:14.546999  451984 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373 for IP: 192.168.50.132
	I0109 00:09:14.547045  451984 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:14.547259  451984 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:14.547310  451984 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:14.547456  451984 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/client.key
	I0109 00:09:14.547531  451984 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key.073edd3d
	I0109 00:09:14.547584  451984 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key
	I0109 00:09:14.547733  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:14.547770  451984 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:14.547778  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:14.547803  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:14.547822  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:14.547851  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:14.547891  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:14.548888  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:14.574032  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:14.599543  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:14.625213  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:09:14.650001  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:14.675008  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:14.699179  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:14.722451  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:14.746559  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:14.769631  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:14.792906  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:14.815748  451984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:14.832389  451984 ssh_runner.go:195] Run: openssl version
	I0109 00:09:14.840602  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:14.856001  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862098  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862187  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.868184  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:14.879131  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:14.890092  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894911  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894969  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.900490  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:14.912056  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:14.923126  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.927937  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.928024  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.933646  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
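
Each CA copied into /usr/share/ca-certificates is also linked under its OpenSSL subject hash in /etc/ssl/certs (3ec20f2e, b5213941 and 51391683 above), which is how OpenSSL-based clients locate trust anchors. A sketch of the pattern for the minikubeCA certificate:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    # the subject hash names the symlink OpenSSL actually looks up
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
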
	I0109 00:09:14.944658  451984 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:14.949507  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:14.956040  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:14.962180  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:14.968224  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:14.974087  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:14.980079  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
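
The six openssl checks above use -checkend 86400, which exits non-zero if the certificate has expired or will expire within 86400 seconds (24 hours), presumably how minikube decides whether the existing control-plane certificates can be reused. Checking one certificate by hand:

    # exit 0: valid for >24h; exit 1: expires (or has expired) within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" \
      || echo "expires within 24h"
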
	I0109 00:09:14.986029  451984 kubeadm.go:404] StartCluster: {Name:embed-certs-845373 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:14.986148  451984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:14.986202  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:15.027950  451984 cri.go:89] found id: ""
	I0109 00:09:15.028035  451984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:15.039282  451984 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:15.039314  451984 kubeadm.go:636] restartCluster start
	I0109 00:09:15.039430  451984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:15.049695  451984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.050930  451984 kubeconfig.go:92] found "embed-certs-845373" server: "https://192.168.50.132:8443"
	I0109 00:09:15.053805  451984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:15.064953  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.065018  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.078921  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.565496  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.565626  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.578601  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.065133  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.065227  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.077749  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.565317  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.565425  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.578351  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:17.065861  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.065998  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.078781  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.762565  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:15.762982  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:15.763010  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:15.762909  453115 retry.go:31] will retry after 2.319709932s: waiting for machine to come up
	I0109 00:09:18.083780  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:18.084295  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:18.084330  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:18.084224  453115 retry.go:31] will retry after 2.101421106s: waiting for machine to come up
	I0109 00:09:20.188389  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:20.188770  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:20.188804  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:20.188712  453115 retry.go:31] will retry after 2.578747646s: waiting for machine to come up
	I0109 00:09:17.565567  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.565690  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.578496  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.065006  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.065120  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.078249  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.565568  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.565732  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.582691  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.065249  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.065353  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.082433  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.564998  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.565129  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.582026  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.065462  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.065563  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.079586  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.565150  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.565253  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.581576  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.065135  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.065246  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.080231  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.565856  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.566034  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.582980  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.065130  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.065245  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.078868  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.769370  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:22.769835  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:22.769877  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:22.769775  453115 retry.go:31] will retry after 4.446013118s: waiting for machine to come up
	I0109 00:09:22.565774  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.565850  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.581869  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.065381  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.065511  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.078260  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.565069  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.565171  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.577588  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.065102  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.065184  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.077356  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.565990  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.566090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.578416  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.065960  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:25.066090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:25.078618  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.078652  451984 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:25.078665  451984 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:25.078689  451984 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:25.078759  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:25.117213  451984 cri.go:89] found id: ""
	I0109 00:09:25.117304  451984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:25.133313  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:25.142683  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:25.142755  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152228  451984 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152252  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:25.273216  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.323239  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.049977221s)
	I0109 00:09:26.323274  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.531333  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.605976  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.691914  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:26.692006  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
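
With the kubeconfig files missing, restartCluster replays individual kubeadm init phases against the staged config instead of doing a full kubeadm init. The sequence logged above, as it would look run by hand (K is just a shorthand for the cached binary directory):

    K=/var/lib/minikube/binaries/v1.28.4
    sudo env PATH="$K:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
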
	I0109 00:09:28.408538  452488 start.go:369] acquired machines lock for "default-k8s-diff-port-834116" in 4m0.587839533s
	I0109 00:09:28.408614  452488 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:28.408627  452488 fix.go:54] fixHost starting: 
	I0109 00:09:28.409094  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:28.409147  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:28.426990  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0109 00:09:28.427467  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:28.428010  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:09:28.428043  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:28.428413  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:28.428726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:28.428887  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:09:28.430477  452488 fix.go:102] recreateIfNeeded on default-k8s-diff-port-834116: state=Stopped err=<nil>
	I0109 00:09:28.430508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	W0109 00:09:28.430658  452488 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:28.432612  452488 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-834116" ...
	I0109 00:09:27.220872  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221372  452237 main.go:141] libmachine: (no-preload-378213) Found IP for machine: 192.168.61.62
	I0109 00:09:27.221401  452237 main.go:141] libmachine: (no-preload-378213) Reserving static IP address...
	I0109 00:09:27.221416  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has current primary IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221769  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.221820  452237 main.go:141] libmachine: (no-preload-378213) DBG | skip adding static IP to network mk-no-preload-378213 - found existing host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"}
	I0109 00:09:27.221842  452237 main.go:141] libmachine: (no-preload-378213) Reserved static IP address: 192.168.61.62
	I0109 00:09:27.221859  452237 main.go:141] libmachine: (no-preload-378213) Waiting for SSH to be available...
	I0109 00:09:27.221877  452237 main.go:141] libmachine: (no-preload-378213) DBG | Getting to WaitForSSH function...
	I0109 00:09:27.224260  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224609  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.224643  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224762  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH client type: external
	I0109 00:09:27.224792  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa (-rw-------)
	I0109 00:09:27.224822  452237 main.go:141] libmachine: (no-preload-378213) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:27.224832  452237 main.go:141] libmachine: (no-preload-378213) DBG | About to run SSH command:
	I0109 00:09:27.224841  452237 main.go:141] libmachine: (no-preload-378213) DBG | exit 0
	I0109 00:09:27.315335  452237 main.go:141] libmachine: (no-preload-378213) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:27.315823  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetConfigRaw
	I0109 00:09:27.316473  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.319014  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319305  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.319339  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319673  452237 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/config.json ...
	I0109 00:09:27.319916  452237 machine.go:88] provisioning docker machine ...
	I0109 00:09:27.319939  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:27.320167  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320354  452237 buildroot.go:166] provisioning hostname "no-preload-378213"
	I0109 00:09:27.320378  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320575  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.322760  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323156  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.323187  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323317  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.323542  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323711  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323869  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.324061  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.324556  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.324577  452237 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-378213 && echo "no-preload-378213" | sudo tee /etc/hostname
	I0109 00:09:27.452901  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-378213
	
	I0109 00:09:27.452957  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.456295  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456636  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.456693  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456919  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.457140  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457343  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457491  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.457671  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.458159  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.458188  452237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-378213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-378213/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-378213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:27.579589  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:27.579626  452237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:27.579658  452237 buildroot.go:174] setting up certificates
	I0109 00:09:27.579674  452237 provision.go:83] configureAuth start
	I0109 00:09:27.579688  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.580039  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.583100  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583557  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.583592  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583759  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.586482  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.586816  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.586862  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.587019  452237 provision.go:138] copyHostCerts
	I0109 00:09:27.587091  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:27.587105  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:27.587162  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:27.587246  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:27.587256  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:27.587276  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:27.587326  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:27.587333  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:27.587350  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:27.587423  452237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.no-preload-378213 san=[192.168.61.62 192.168.61.62 localhost 127.0.0.1 minikube no-preload-378213]
	I0109 00:09:27.642093  452237 provision.go:172] copyRemoteCerts
	I0109 00:09:27.642159  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:27.642186  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.645245  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645702  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.645736  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645959  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.646180  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.646367  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.646552  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:27.740878  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:09:27.770934  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:27.794548  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:27.819155  452237 provision.go:86] duration metric: configureAuth took 239.463059ms
	I0109 00:09:27.819191  452237 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:27.819452  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:09:27.819556  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.822793  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823249  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.823282  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.823666  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823812  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823943  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.824098  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.824547  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.824575  452237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:28.155878  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:28.155939  452237 machine.go:91] provisioned docker machine in 835.996764ms
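
Part of that provisioning was the sysconfig fragment written over SSH just above: CRI-O is told to treat the service CIDR as an insecure registry (presumably so in-cluster registries reachable via a ClusterIP can be pulled from without TLS), and the runtime is restarted so the option takes effect. The file as echoed back in the SSH output:

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    # followed by: sudo systemctl restart crio
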
	I0109 00:09:28.155955  452237 start.go:300] post-start starting for "no-preload-378213" (driver="kvm2")
	I0109 00:09:28.155975  452237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:28.156002  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.156370  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:28.156408  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.159411  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.159831  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.159863  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.160134  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.160347  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.160553  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.160700  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.249092  452237 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:28.253686  452237 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:28.253721  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:28.253812  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:28.253914  452237 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:28.254042  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:28.262550  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:28.286467  452237 start.go:303] post-start completed in 130.492214ms
	I0109 00:09:28.286497  452237 fix.go:56] fixHost completed within 20.373793038s
	I0109 00:09:28.286527  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.289569  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290022  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.290056  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290374  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.290619  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.290815  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.291040  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.291256  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:28.291770  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:28.291788  452237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:28.408354  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758968.353872845
	
	I0109 00:09:28.408384  452237 fix.go:206] guest clock: 1704758968.353872845
	I0109 00:09:28.408392  452237 fix.go:219] Guest: 2024-01-09 00:09:28.353872845 +0000 UTC Remote: 2024-01-09 00:09:28.286503221 +0000 UTC m=+283.122022206 (delta=67.369624ms)
	I0109 00:09:28.408411  452237 fix.go:190] guest clock delta is within tolerance: 67.369624ms
	I0109 00:09:28.408416  452237 start.go:83] releasing machines lock for "no-preload-378213", held for 20.495748993s
	I0109 00:09:28.408448  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.408745  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:28.411951  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412357  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.412395  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412550  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413258  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413495  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413588  452237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:28.413639  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.414067  452237 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:28.414125  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.416878  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417049  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417271  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417292  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417550  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417710  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.417720  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417771  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417896  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.417935  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.418108  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.418105  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.418226  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.533738  452237 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:28.541801  452237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:28.692517  452237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:28.700384  452237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:28.700455  452237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:28.720264  452237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:28.720300  452237 start.go:475] detecting cgroup driver to use...
	I0109 00:09:28.720376  452237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:28.739758  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:28.755682  452237 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:28.755754  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:28.772178  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:28.792261  452237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:28.908562  452237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:29.042390  452237 docker.go:219] disabling docker service ...
	I0109 00:09:29.042528  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:29.055964  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:29.071788  452237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:29.191963  452237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:29.322608  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:29.336149  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:29.357616  452237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:29.357765  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.372357  452237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:29.372436  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.393266  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.405729  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.417114  452237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:29.428259  452237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:29.440397  452237 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:29.440499  452237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:29.454482  452237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:29.467600  452237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:29.590644  452237 ssh_runner.go:195] Run: sudo systemctl restart crio
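	[editor's note] The CRI-O reconfiguration recorded above (pause image, cgroupfs cgroup manager, conmon cgroup, then daemon-reload and restart) can be reproduced by hand on the guest. The Go sketch below is an assumption-laden approximation using os/exec rather than minikube's ssh_runner; the sed expressions and file path are copied verbatim from the log lines above, and it must be run as root inside the VM.

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// Replays the CRI-O drop-in edits seen in the log above, followed by a
	// daemon-reload and a crio restart. Values are taken from the log; this is
	// a sketch of the sequence, not minikube's implementation.
	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := [][]string{
			{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
			{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
			{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
			{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "crio"},
		}
		for _, c := range cmds {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s\n", c, err, out)
				return
			}
		}
		fmt.Println("crio reconfigured and restarted")
	}
	```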
	I0109 00:09:29.786115  452237 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:29.786205  452237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:29.793049  452237 start.go:543] Will wait 60s for crictl version
	I0109 00:09:29.793129  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:29.798630  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:29.847758  452237 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:29.847850  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.905071  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.963992  452237 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:09:29.965790  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:29.969222  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969638  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:29.969687  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969930  452237 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:29.974709  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:29.989617  452237 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:09:29.989667  452237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:30.034776  452237 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:09:30.034804  452237 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.034911  452237 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0109 00:09:30.034925  452237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.034948  452237 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.035060  452237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.034904  452237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.035172  452237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036679  452237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036727  452237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.036737  452237 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.036788  452237 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0109 00:09:30.036814  452237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.036730  452237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.036846  452237 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.036678  452237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.208127  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:27.192095  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:27.692608  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.192176  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.692194  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.192059  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.219995  451984 api_server.go:72] duration metric: took 2.528085009s to wait for apiserver process to appear ...
	I0109 00:09:29.220032  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:09:29.220058  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:28.434238  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Start
	I0109 00:09:28.434453  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring networks are active...
	I0109 00:09:28.435324  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network default is active
	I0109 00:09:28.435804  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network mk-default-k8s-diff-port-834116 is active
	I0109 00:09:28.436322  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Getting domain xml...
	I0109 00:09:28.437072  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Creating domain...
	I0109 00:09:29.958911  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting to get IP...
	I0109 00:09:29.959938  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960820  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960896  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:29.960822  453241 retry.go:31] will retry after 210.498897ms: waiting for machine to come up
	I0109 00:09:30.173307  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173717  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173752  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.173670  453241 retry.go:31] will retry after 342.664675ms: waiting for machine to come up
	I0109 00:09:30.518442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519113  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.519069  453241 retry.go:31] will retry after 411.240969ms: waiting for machine to come up
	I0109 00:09:30.931762  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932152  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.932104  453241 retry.go:31] will retry after 402.965268ms: waiting for machine to come up
	I0109 00:09:31.336957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337426  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337459  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.337393  453241 retry.go:31] will retry after 626.321347ms: waiting for machine to come up
	I0109 00:09:31.965071  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965632  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965665  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.965592  453241 retry.go:31] will retry after 787.166947ms: waiting for machine to come up
	I0109 00:09:30.217603  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0109 00:09:30.234877  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.243097  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.258262  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.273678  452237 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0109 00:09:30.273761  452237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.273826  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.278909  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.285277  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.289552  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.430758  452237 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0109 00:09:30.430813  452237 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.430866  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.430995  452237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0109 00:09:30.431023  452237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.431061  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456561  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.456591  452237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0109 00:09:30.456636  452237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.456690  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456722  452237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0109 00:09:30.456757  452237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.456791  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456911  452237 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0109 00:09:30.456945  452237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.456976  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.482028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.482298  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.482547  452237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0109 00:09:30.482694  452237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.482754  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.518754  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.518899  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.518966  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.519317  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0109 00:09:30.519422  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.629846  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630082  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630145  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630189  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630022  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0109 00:09:30.630280  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:30.630028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.657819  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657907  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0109 00:09:30.657966  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657824  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0109 00:09:30.658025  452237 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658053  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:30.658084  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0109 00:09:30.658091  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658142  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0109 00:09:30.658173  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0109 00:09:30.714523  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:30.714654  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:32.867027  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.208889866s)
	I0109 00:09:32.867091  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0109 00:09:32.867107  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.209103985s)
	I0109 00:09:32.867122  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:32.867141  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867187  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.209109716s)
	I0109 00:09:32.867221  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0109 00:09:32.867220  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.15254199s)
	I0109 00:09:32.867251  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867190  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:35.150432  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.283143174s)
	I0109 00:09:35.150478  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0109 00:09:35.150509  452237 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:35.150560  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:34.179483  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.179518  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.179533  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.210742  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.210780  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.220940  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.259813  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.259869  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:34.720337  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.733062  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.733105  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.220599  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.228775  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:35.228814  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.720241  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.725882  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:09:35.736706  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:09:35.736745  451984 api_server.go:131] duration metric: took 6.516702561s to wait for apiserver health ...
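	[editor's note] The healthz wait logged above is a series of anonymous GETs against https://192.168.50.132:8443/healthz that tolerate 403/500 responses until a 200 "ok" arrives. The following Go sketch approximates that probe loop; it is not the api_server.go implementation, the endpoint is the one from this log, and TLS verification is skipped only because the probe is anonymous.

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// Polls the apiserver /healthz endpoint until it returns 200 "ok" or the
	// deadline passes, similar to the wait recorded in the log above.
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.132:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never became ready")
	}
	```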
	I0109 00:09:35.736790  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:35.736811  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:35.739014  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:09:35.740624  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:09:35.776055  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:09:35.814280  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:09:35.832281  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:09:35.832330  451984 system_pods.go:61] "coredns-5dd5756b68-vkd62" [c676d069-cca7-428c-8eec-026ecea14be2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:09:35.832342  451984 system_pods.go:61] "etcd-embed-certs-845373" [92d4616d-126c-4ee9-9475-9d0c790090c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:09:35.832354  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [9663f585-eca1-4f8f-8a93-aea9b4e98c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:09:35.832368  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [41b4ce59-d838-4798-b593-93c7c8573733] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:09:35.832383  451984 system_pods.go:61] "kube-proxy-tbzpb" [132469d5-d267-4869-ad09-c9fba8d0f9d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:09:35.832398  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [336147ec-8318-496b-986d-55845e7dd9a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:09:35.832408  451984 system_pods.go:61] "metrics-server-57f55c9bc5-2p4js" [c37e24f3-c50b-4169-9d0b-48e21072a114] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:09:35.832421  451984 system_pods.go:61] "storage-provisioner" [e558d9f2-6d92-41d6-82bf-194f53ead52c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:09:35.832436  451984 system_pods.go:74] duration metric: took 18.123808ms to wait for pod list to return data ...
	I0109 00:09:35.832451  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:09:35.836031  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:35.836180  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:35.836225  451984 node_conditions.go:105] duration metric: took 3.766883ms to run NodePressure ...
	I0109 00:09:35.836250  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:36.192967  451984 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198294  451984 kubeadm.go:787] kubelet initialised
	I0109 00:09:36.198327  451984 kubeadm.go:788] duration metric: took 5.32566ms waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198373  451984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:36.205198  451984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:36.230481  451984 pod_ready.go:97] node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230560  451984 pod_ready.go:81] duration metric: took 25.328027ms waiting for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	E0109 00:09:36.230576  451984 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230600  451984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:32.754128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779281  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779328  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:32.754425  453241 retry.go:31] will retry after 781.872506ms: waiting for machine to come up
	I0109 00:09:33.538136  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538643  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:33.538562  453241 retry.go:31] will retry after 1.315575893s: waiting for machine to come up
	I0109 00:09:34.856083  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857209  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857287  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:34.857007  453241 retry.go:31] will retry after 1.252692701s: waiting for machine to come up
	I0109 00:09:36.111647  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112092  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112127  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:36.112042  453241 retry.go:31] will retry after 1.549931798s: waiting for machine to come up
	I0109 00:09:37.664325  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664771  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664841  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:37.664729  453241 retry.go:31] will retry after 2.220936863s: waiting for machine to come up
	I0109 00:09:39.585741  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.435146297s)
	I0109 00:09:39.585853  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0109 00:09:39.585890  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:39.585954  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:38.239319  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:40.240459  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:39.886897  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887409  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887446  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:39.887322  453241 retry.go:31] will retry after 3.125817684s: waiting for machine to come up
	I0109 00:09:42.688186  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.102196226s)
	I0109 00:09:42.688238  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0109 00:09:42.688270  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:42.688333  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:44.144243  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.455874893s)
	I0109 00:09:44.144277  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0109 00:09:44.144322  452237 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:44.144396  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:45.193429  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.048998334s)
	I0109 00:09:45.193464  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0109 00:09:45.193501  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:45.193553  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:42.241597  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:44.740359  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:46.239061  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.239098  451984 pod_ready.go:81] duration metric: took 10.008483597s waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.239112  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244571  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.244598  451984 pod_ready.go:81] duration metric: took 5.476365ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244610  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249839  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.249866  451984 pod_ready.go:81] duration metric: took 5.248385ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249891  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254718  451984 pod_ready.go:92] pod "kube-proxy-tbzpb" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.254742  451984 pod_ready.go:81] duration metric: took 4.843779ms waiting for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254752  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:43.016904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017479  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:43.017386  453241 retry.go:31] will retry after 3.976875386s: waiting for machine to come up
	I0109 00:09:46.996452  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996902  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996937  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:46.996855  453241 retry.go:31] will retry after 5.149738116s: waiting for machine to come up
	I0109 00:09:47.750708  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.557124662s)
	I0109 00:09:47.750737  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0109 00:09:47.750767  452237 cache_images.go:123] Successfully loaded all cached images
	I0109 00:09:47.750773  452237 cache_images.go:92] LoadImages completed in 17.715956149s
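The "Loading image:" steps above walk the cached tarballs under /var/lib/minikube/images and feed each one to "sudo podman load -i ..." so CRI-O can serve the images without pulling. A rough, self-contained sketch of that loop as if run directly on the guest follows; the real code goes through ssh_runner and skips images already present, which this sketch does not.

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImages imports every image tarball in dir into local container
	// storage via podman, mirroring the crio.go "Loading image:" steps above.
	func loadCachedImages(dir string) error {
		tarballs, err := filepath.Glob(filepath.Join(dir, "*"))
		if err != nil {
			return err
		}
		for _, t := range tarballs {
			fmt.Println("Loading image:", t)
			out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load -i %s: %v\n%s", t, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
			fmt.Println(err)
		}
	}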
	I0109 00:09:47.750871  452237 ssh_runner.go:195] Run: crio config
	I0109 00:09:47.811486  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:09:47.811510  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:47.811535  452237 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:47.811560  452237 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.62 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-378213 NodeName:no-preload-378213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:47.811757  452237 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-378213"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:47.811881  452237 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-378213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:47.811954  452237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:09:47.821353  452237 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:47.821426  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:47.830117  452237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0109 00:09:47.847966  452237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:09:47.865130  452237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
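The three "scp memory" lines above copy generated content (the kubelet drop-in, the kubelet unit file, and kubeadm.yaml.new) straight from memory to files on the VM. A minimal sketch of the same idea using the external ssh client and sudo tee follows; the host alias and the use of tee are illustrative assumptions, not how ssh_runner is implemented.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// copyToRemote writes data to path on the remote host by piping it into
	// "sudo tee", which avoids creating a local temp file first.
	func copyToRemote(host, path string, data []byte) error {
		cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", path))
		cmd.Stdin = bytes.NewReader(data)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("copy to %s:%s: %v\n%s", host, path, err, out)
		}
		return nil
	}

	func main() {
		kubeadmYAML := []byte("# generated kubeadm config would go here\n")
		// "docker@192.168.61.62" mirrors the user/IP seen elsewhere in the log; adjust as needed.
		if err := copyToRemote("docker@192.168.61.62", "/var/tmp/minikube/kubeadm.yaml.new", kubeadmYAML); err != nil {
			fmt.Println(err)
		}
	}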
	I0109 00:09:47.881920  452237 ssh_runner.go:195] Run: grep 192.168.61.62	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:47.885907  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:47.899472  452237 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213 for IP: 192.168.61.62
	I0109 00:09:47.899519  452237 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:47.899687  452237 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:47.899729  452237 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:47.899792  452237 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/client.key
	I0109 00:09:47.899854  452237 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key.fe752756
	I0109 00:09:47.899891  452237 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key
	I0109 00:09:47.899991  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:47.900022  452237 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:47.900033  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:47.900056  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:47.900084  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:47.900111  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:47.900176  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:47.900831  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:47.926702  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:47.952472  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:47.977143  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:09:48.001909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:48.028506  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:48.054909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:48.079320  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:48.106719  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:48.133440  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:48.157353  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:48.180860  452237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:48.198490  452237 ssh_runner.go:195] Run: openssl version
	I0109 00:09:48.204240  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:48.214015  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218654  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218717  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.224372  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:48.233922  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:48.243425  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248305  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248381  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.254018  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:48.263791  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:48.273568  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278373  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278438  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.284003  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:48.296358  452237 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:48.301336  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:48.307645  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:48.313470  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:48.319349  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:48.325344  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:48.331352  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
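The six openssl runs above ask whether each control-plane certificate expires within the next 86400 seconds (that is what -checkend 86400 means). The same check can be done natively, as in this small sketch; the certificate paths are taken from the log and the program is assumed to run as root on the guest.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the native equivalent of "openssl x509 -noout -checkend".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) < d, nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			soon, err := expiresWithin(c, 24*time.Hour)
			if err != nil {
				fmt.Fprintf(os.Stderr, "%s: %v\n", c, err)
				continue
			}
			if soon {
				fmt.Printf("%s expires within 24h\n", c)
			}
		}
	}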
	I0109 00:09:48.337159  452237 kubeadm.go:404] StartCluster: {Name:no-preload-378213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:48.337255  452237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:48.337302  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:48.374150  452237 cri.go:89] found id: ""
	I0109 00:09:48.374229  452237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:48.383627  452237 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:48.383649  452237 kubeadm.go:636] restartCluster start
	I0109 00:09:48.383699  452237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:48.392428  452237 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.393515  452237 kubeconfig.go:92] found "no-preload-378213" server: "https://192.168.61.62:8443"
	I0109 00:09:48.395997  452237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:48.404639  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.404708  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.416205  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.904794  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.904896  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.916391  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.404903  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.405006  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.416469  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.905053  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.905224  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.916621  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.262991  451984 pod_ready.go:102] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:50.262235  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:50.262262  451984 pod_ready.go:81] duration metric: took 4.007503301s waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:50.262275  451984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
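The pod_ready.go lines above poll each control-plane pod until its Ready condition reports True, with a 4m0s cap per pod. The following is a hedged client-go sketch of that kind of wait, not minikube's actual helper; it assumes a reachable kubeconfig at the default location and uses the etcd pod name from the log purely as an example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls until the named pod reports the Ready condition,
	// mirroring the "waiting up to 4m0s for pod ... to be Ready" lines above.
	func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodReady(cs, "kube-system", "etcd-embed-certs-845373", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}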
	I0109 00:09:52.150891  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151383  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Found IP for machine: 192.168.39.73
	I0109 00:09:52.151416  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserving static IP address...
	I0109 00:09:52.151442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has current primary IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.151943  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | skip adding static IP to network mk-default-k8s-diff-port-834116 - found existing host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"}
	I0109 00:09:52.151966  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserved static IP address: 192.168.39.73
	I0109 00:09:52.152005  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for SSH to be available...
	I0109 00:09:52.152039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Getting to WaitForSSH function...
	I0109 00:09:52.154139  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154471  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.154514  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154642  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH client type: external
	I0109 00:09:52.154672  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa (-rw-------)
	I0109 00:09:52.154701  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:52.154719  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | About to run SSH command:
	I0109 00:09:52.154736  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | exit 0
	I0109 00:09:52.247320  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | SSH cmd err, output: <nil>: 
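WaitForSSH above shells out to the system ssh binary with a fixed set of hardening flags and runs "exit 0" until the guest answers. A compact sketch of that probe follows, reusing the flags shown in the debug lines; the key path, address, and two-second poll interval are taken from (or assumed around) this profile's log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once "exit 0" succeeds over the external ssh client,
	// using the same options the libmachine debug lines show above.
	func sshReady(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			addr, "exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa"
		for {
			if err := sshReady("docker@192.168.39.73", key); err != nil {
				fmt.Println(err)
				time.Sleep(2 * time.Second) // assumed poll interval
				continue
			}
			fmt.Println("SSH is available")
			return
		}
	}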
	I0109 00:09:52.247704  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetConfigRaw
	I0109 00:09:52.248366  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.251047  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251482  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.251511  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251734  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:09:52.251981  452488 machine.go:88] provisioning docker machine ...
	I0109 00:09:52.252003  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:52.252219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252396  452488 buildroot.go:166] provisioning hostname "default-k8s-diff-port-834116"
	I0109 00:09:52.252418  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252612  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.254861  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255244  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.255276  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255485  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.255657  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255956  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.256111  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.256468  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.256485  452488 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-834116 && echo "default-k8s-diff-port-834116" | sudo tee /etc/hostname
	I0109 00:09:52.392092  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-834116
	
	I0109 00:09:52.392128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.394807  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395260  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.395312  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395539  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.395797  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396091  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396289  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.396464  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.396839  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.396863  452488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-834116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-834116/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-834116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:52.527950  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:52.527981  452488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:52.528006  452488 buildroot.go:174] setting up certificates
	I0109 00:09:52.528021  452488 provision.go:83] configureAuth start
	I0109 00:09:52.528033  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.528365  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.531179  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531597  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.531624  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531763  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.534073  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534480  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.534521  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534650  452488 provision.go:138] copyHostCerts
	I0109 00:09:52.534726  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:52.534737  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:52.534796  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:52.534902  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:52.534912  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:52.534933  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:52.535020  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:52.535027  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:52.535042  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:52.535093  452488 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-834116 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube default-k8s-diff-port-834116]
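provision.go above generates a per-machine server certificate signed by the shared minikube CA, with the IP and hostname SANs listed in the log line. A condensed crypto/x509 sketch of that step follows; the file locations, one-year validity, and PKCS#1-encoded CA key are assumptions, and error handling is collapsed into panics to keep it short.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	// loadPEM reads the first PEM block from path.
	func loadPEM(path string) *pem.Block {
		data, err := os.ReadFile(path)
		must(err)
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM data in " + path)
		}
		return block
	}

	func main() {
		// Load the shared CA generated earlier under .minikube/certs (paths assumed).
		caCert, err := x509.ParseCertificate(loadPEM("ca.pem").Bytes)
		must(err)
		caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem").Bytes) // assumes an RSA (PKCS#1) CA key
		must(err)

		// New key pair for this machine's server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-834116"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-834116"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.73"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		must(err)

		must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
		must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600))
	}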
	I0109 00:09:53.636158  451943 start.go:369] acquired machines lock for "old-k8s-version-003293" in 1m0.185697203s
	I0109 00:09:53.636214  451943 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:53.636222  451943 fix.go:54] fixHost starting: 
	I0109 00:09:53.636646  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:53.636682  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:53.654194  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0109 00:09:53.654606  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:53.655203  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:09:53.655227  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:53.655659  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:53.655927  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:09:53.656139  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:09:53.657909  451943 fix.go:102] recreateIfNeeded on old-k8s-version-003293: state=Stopped err=<nil>
	I0109 00:09:53.657934  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	W0109 00:09:53.658135  451943 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:53.660261  451943 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003293" ...
	I0109 00:09:52.872029  452488 provision.go:172] copyRemoteCerts
	I0109 00:09:52.872106  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:52.872134  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.874824  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875218  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.875256  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.875726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.875959  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.876122  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:52.970940  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:52.995353  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0109 00:09:53.019846  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:53.048132  452488 provision.go:86] duration metric: configureAuth took 520.096734ms
	I0109 00:09:53.048166  452488 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:53.048357  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:53.048458  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.051336  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051745  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.051781  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051963  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.052200  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052578  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.052753  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.053273  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.053296  452488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:53.371482  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:53.371519  452488 machine.go:91] provisioned docker machine in 1.119521349s
	I0109 00:09:53.371534  452488 start.go:300] post-start starting for "default-k8s-diff-port-834116" (driver="kvm2")
	I0109 00:09:53.371572  452488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:53.371601  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.371940  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:53.371968  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.374606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.374999  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.375039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.375242  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.375487  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.375668  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.375823  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.469684  452488 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:53.474184  452488 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:53.474226  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:53.474291  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:53.474375  452488 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:53.474510  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:53.484106  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:53.508477  452488 start.go:303] post-start completed in 136.921252ms
	I0109 00:09:53.508516  452488 fix.go:56] fixHost completed within 25.099889324s
	I0109 00:09:53.508540  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.511508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.511954  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.511993  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.512174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.512412  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512605  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512739  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.512966  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.513304  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.513319  452488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:09:53.635969  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758993.581588382
	
	I0109 00:09:53.635992  452488 fix.go:206] guest clock: 1704758993.581588382
	I0109 00:09:53.636001  452488 fix.go:219] Guest: 2024-01-09 00:09:53.581588382 +0000 UTC Remote: 2024-01-09 00:09:53.508520878 +0000 UTC m=+265.847432935 (delta=73.067504ms)
	I0109 00:09:53.636037  452488 fix.go:190] guest clock delta is within tolerance: 73.067504ms
	I0109 00:09:53.636042  452488 start.go:83] releasing machines lock for "default-k8s-diff-port-834116", held for 25.227459425s
	I0109 00:09:53.636078  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.636408  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:53.639469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.639957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.639990  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.640149  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640967  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.641079  452488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:53.641126  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.641236  452488 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:53.641263  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.643872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644145  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644230  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644258  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644427  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644519  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644552  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644618  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644698  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644784  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.644850  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644945  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.645012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.645188  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.758973  452488 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:53.765494  452488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:53.913457  452488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:53.921317  452488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:53.921409  452488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:53.937393  452488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:53.937422  452488 start.go:475] detecting cgroup driver to use...
	I0109 00:09:53.937501  452488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:53.954986  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:53.967577  452488 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:53.967661  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:53.981370  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:53.994954  452488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:54.113662  452488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:54.257917  452488 docker.go:219] disabling docker service ...
	I0109 00:09:54.258009  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:54.275330  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:54.287545  452488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:54.413696  452488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:54.534759  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:54.548789  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:54.567131  452488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:54.567209  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.578605  452488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:54.578690  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.588764  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.598290  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.608187  452488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:54.619339  452488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:54.627744  452488 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:54.627810  452488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:54.640572  452488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:54.649169  452488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:54.774028  452488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:54.981035  452488 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:54.981123  452488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:54.986812  452488 start.go:543] Will wait 60s for crictl version
	I0109 00:09:54.986874  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:09:54.991067  452488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:55.026881  452488 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:55.026988  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.084315  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.135003  452488 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
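The runtime preparation logged above for default-k8s-diff-port-834116 reduces to a few idempotent edits of /etc/crio/crio.conf.d/02-crio.conf followed by a service restart. A condensed sketch of the same sequence, runnable by hand over SSH on the node (the individual commands are the ones in the log; bundling them into one script is only for illustration):

    # point CRI-O at the expected pause image and the cgroupfs driver, then restart it
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo /usr/bin/crictl version    # should report RuntimeName cri-o, RuntimeVersion 1.24.1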
	I0109 00:09:50.405359  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.405454  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.417541  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:50.904703  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.904809  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.916106  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.404732  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.404823  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.418697  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.905352  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.905439  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.917655  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.404773  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.404858  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.417345  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.905434  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.905529  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.916604  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.404704  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.404820  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.416990  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.905624  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.905727  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.918455  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.404944  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.405034  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.419015  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.905601  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.905738  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.921252  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
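Every "Checking apiserver status ..." entry in this block is the same probe on a roughly 500ms cadence: pgrep looks for a kube-apiserver process whose full command line mentions minikube, and the repeated exit status 1 simply means the control plane has not come back yet. A hand-run equivalent of the wait (the loop itself is illustrative; the pgrep invocation is the one from the log):

    # block until the apiserver process reappears, the same check the restart path polls
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        sleep 0.5
    done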
	I0109 00:09:53.661730  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Start
	I0109 00:09:53.661977  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring networks are active...
	I0109 00:09:53.662718  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network default is active
	I0109 00:09:53.663173  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network mk-old-k8s-version-003293 is active
	I0109 00:09:53.663701  451943 main.go:141] libmachine: (old-k8s-version-003293) Getting domain xml...
	I0109 00:09:53.664456  451943 main.go:141] libmachine: (old-k8s-version-003293) Creating domain...
	I0109 00:09:55.030325  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting to get IP...
	I0109 00:09:55.031241  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.031720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.031800  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.031693  453422 retry.go:31] will retry after 209.915867ms: waiting for machine to come up
	I0109 00:09:55.243218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.243740  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.243792  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.243678  453422 retry.go:31] will retry after 309.964884ms: waiting for machine to come up
	I0109 00:09:55.555468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.556044  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.556075  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.555982  453422 retry.go:31] will retry after 306.870224ms: waiting for machine to come up
	I0109 00:09:55.864558  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.865161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.865199  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.865113  453422 retry.go:31] will retry after 475.599739ms: waiting for machine to come up
	I0109 00:09:52.270751  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:54.271341  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:56.775574  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:55.136380  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:55.139749  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140142  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:55.140174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140387  452488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:55.145715  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
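The two commands above are a small idempotent update of /etc/hosts: the first grep checks whether host.minikube.internal already resolves to the gateway, and only then is any stale entry filtered out and the fresh mapping written back through a temp file. Spelled out as a standalone snippet (the IP is the one from this run):

    # ensure host.minikube.internal points at the host-side gateway
    if ! grep -q $'192.168.39.1\thost.minikube.internal$' /etc/hosts; then
        { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
        sudo cp /tmp/h.$$ /etc/hosts
    fi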
	I0109 00:09:55.159881  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:55.159972  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:55.209715  452488 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:55.209814  452488 ssh_runner.go:195] Run: which lz4
	I0109 00:09:55.214766  452488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:09:55.219645  452488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:55.219683  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:57.101116  452488 crio.go:444] Took 1.886420 seconds to copy over tarball
	I0109 00:09:57.101207  452488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
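Because crictl reported no preloaded kube-apiserver image, the ~458 MB cached tarball for v1.28.4/cri-o is copied onto the node and unpacked straight into the CRI-O storage under /var, keeping security.capability xattrs on files inside the image layers. Done by hand it would look like this (commands as in the log; the cleanup and re-check are the steps that follow further down):

    # extract the preloaded images into /var and confirm CRI-O can now see them
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json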
	I0109 00:09:55.405633  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.405734  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.420242  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:55.905578  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.905685  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.923018  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.405516  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.405602  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.420028  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.905320  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.905409  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.940464  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.404810  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.404925  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.420965  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.905566  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.905684  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.920601  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:58.404728  452237 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:58.404779  452237 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:58.404821  452237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:58.404906  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:58.450415  452237 cri.go:89] found id: ""
	I0109 00:09:58.450510  452237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:58.469938  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:58.481877  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:58.481963  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494336  452237 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494367  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:58.644325  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.472346  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.715956  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.857573  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
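Since none of the /etc/kubernetes/*.conf files survived the stop, the cluster is rebuilt by replaying individual kubeadm init phases against the generated config rather than running a full kubeadm init. The sequence, as invoked in the five Run lines above (the PATH prefix points at the cached v1.29.0-rc.2 binaries):

    # replay the init phases needed to bring the control plane back
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml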
	I0109 00:09:59.962996  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:59.963097  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:56.342815  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.343422  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.343456  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.343365  453422 retry.go:31] will retry after 512.8445ms: waiting for machine to come up
	I0109 00:09:56.858161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.858689  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.858720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.858631  453422 retry.go:31] will retry after 649.65221ms: waiting for machine to come up
	I0109 00:09:57.509509  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:57.510080  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:57.510121  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:57.510023  453422 retry.go:31] will retry after 1.153518379s: waiting for machine to come up
	I0109 00:09:58.665328  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:58.665946  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:58.665986  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:58.665886  453422 retry.go:31] will retry after 1.392576392s: waiting for machine to come up
	I0109 00:10:00.060701  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:00.061368  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:00.061416  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:00.061263  453422 retry.go:31] will retry after 1.185250663s: waiting for machine to come up
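The "waiting for machine to come up" retries for old-k8s-version-003293 are all the same lookup: libmachine keeps re-reading the DHCP leases of the mk-old-k8s-version-003293 libvirt network until one matches the domain's MAC (52:54:00:38:0e:b5), backing off a little longer each round. The same lease table can be inspected from the hypervisor side while a run is stuck here (assuming virsh is available on the host):

    # show the leases libmachine is polling; an empty table means the guest has not requested an address yet
    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-003293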
	I0109 00:09:59.270305  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:01.271958  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:00.887146  452488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.785897124s)
	I0109 00:10:00.887183  452488 crio.go:451] Took 3.786033 seconds to extract the tarball
	I0109 00:10:00.887196  452488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:00.940322  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:01.087742  452488 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:10:01.087778  452488 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:10:01.087861  452488 ssh_runner.go:195] Run: crio config
	I0109 00:10:01.154384  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:01.154411  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:01.154432  452488 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:01.154460  452488 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-834116 NodeName:default-k8s-diff-port-834116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:10:01.154664  452488 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-834116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:01.154768  452488 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-834116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0109 00:10:01.154837  452488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:10:01.165075  452488 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:01.165167  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:01.175380  452488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0109 00:10:01.198018  452488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:01.216515  452488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
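At this point the three rendered artifacts have been written to their canonical locations on the node: the kubelet drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the unit file to /lib/systemd/system/kubelet.service, and the kubeadm config to /var/tmp/minikube/kubeadm.yaml.new. When a StartStop failure like the ones in this report needs manual inspection, they can be read back directly from the profile's VM (illustrative command, not part of the test):

    # inspect the generated kubeadm config on the node
    minikube -p default-k8s-diff-port-834116 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new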
	I0109 00:10:01.238477  452488 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:01.242706  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:01.256799  452488 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116 for IP: 192.168.39.73
	I0109 00:10:01.256833  452488 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:01.257009  452488 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:01.257084  452488 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:01.257180  452488 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/client.key
	I0109 00:10:01.257272  452488 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key.8b49dc8b
	I0109 00:10:01.257330  452488 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key
	I0109 00:10:01.257473  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:01.257512  452488 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:01.257529  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:01.257582  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:01.257632  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:01.257674  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:01.257737  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:01.258699  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:01.288498  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:10:01.315010  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:01.342657  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:10:01.368423  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:01.394295  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:01.423461  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:01.452044  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:01.478834  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:01.505029  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:01.531765  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:01.557126  452488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:01.575037  452488 ssh_runner.go:195] Run: openssl version
	I0109 00:10:01.580971  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:01.592882  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598205  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598285  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.604293  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:01.615508  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:01.625979  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631195  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631268  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.637322  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:01.649611  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:01.661754  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667033  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667114  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.673312  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:01.687649  452488 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:01.694523  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:01.701260  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:01.709371  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:01.717249  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:01.724104  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:01.730706  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
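Each openssl x509 -checkend 86400 call above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); a non-zero exit on any of them would flag that cert so it is regenerated rather than reused on restart. The same check can be run by hand against any of the listed paths:

    # exit status 0 means the cert is good for at least another day
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid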
	I0109 00:10:01.738716  452488 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:01.738846  452488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:01.738935  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:01.789522  452488 cri.go:89] found id: ""
	I0109 00:10:01.789639  452488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:01.802440  452488 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:01.802470  452488 kubeadm.go:636] restartCluster start
	I0109 00:10:01.802530  452488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:01.814839  452488 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:01.816303  452488 kubeconfig.go:92] found "default-k8s-diff-port-834116" server: "https://192.168.39.73:8444"
	I0109 00:10:01.818978  452488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:01.829115  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:01.829200  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:01.841947  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:02.329489  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.329629  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.346716  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:00.463974  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:00.963295  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.463906  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.963508  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.463259  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.964275  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.464037  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.963542  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.998344  452237 api_server.go:72] duration metric: took 4.035357514s to wait for apiserver process to appear ...
	I0109 00:10:03.998383  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:03.998415  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:03.999025  452237 api_server.go:269] stopped: https://192.168.61.62:8443/healthz: Get "https://192.168.61.62:8443/healthz": dial tcp 192.168.61.62:8443: connect: connection refused
	I0109 00:10:04.498619  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:01.248726  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:01.249297  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:01.249334  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:01.249190  453422 retry.go:31] will retry after 2.101995832s: waiting for machine to come up
	I0109 00:10:03.353250  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:03.353837  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:03.353870  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:03.353803  453422 retry.go:31] will retry after 2.338357499s: waiting for machine to come up
	I0109 00:10:05.694257  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:05.694773  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:05.694805  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:05.694753  453422 retry.go:31] will retry after 2.962877462s: waiting for machine to come up
	I0109 00:10:03.772407  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:05.776569  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:02.829349  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.829477  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.845294  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.329917  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.330034  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.345877  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.829787  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.829908  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.845499  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.329869  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.329968  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.345228  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.829841  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.829964  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.841831  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.329392  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.329534  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.344928  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.829388  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.829490  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.845517  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.329745  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.329846  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.344692  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.829201  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.829339  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.844107  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.329562  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.329679  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.341888  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.617974  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.618015  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.618037  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:07.676283  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.676318  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.999237  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.036271  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.036307  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.498881  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.504457  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.504490  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.998535  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:09.009194  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:10:09.017267  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:10:09.017300  452237 api_server.go:131] duration metric: took 5.018909056s to wait for apiserver health ...
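The healthz progression above is the usual restart pattern: connection refused while the static pod is still starting, 403 once the server is listening but anonymous access to /healthz has not yet been authorized, 500 while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller are still pending, and finally a bare 200 "ok". The same endpoint can be probed manually; -k is needed because the apiserver's serving cert is not in the local trust store:

    # verbose mode lists the individual checks even when the overall status is healthy
    curl -k https://192.168.61.62:8443/healthz?verbose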
	I0109 00:10:09.017311  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:10:09.017319  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:09.019322  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:09.020666  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:09.030282  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:09.049477  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:09.063218  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:09.063264  452237 system_pods.go:61] "coredns-76f75df574-kw4v7" [6a2a3896-7b4c-4912-9e6a-0033564d211b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:09.063277  452237 system_pods.go:61] "etcd-no-preload-378213" [b650412b-fa3a-4490-9b43-caf6ac1cb8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:09.063294  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [b372f056-7243-416e-905f-ba80a332005a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:09.063307  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8b32fab5-ef2b-4145-8cf8-8ec616a73798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:09.063317  452237 system_pods.go:61] "kube-proxy-kxjqj" [40d27586-c2e4-407e-ac43-c0dbd851427e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:09.063325  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [2a609b1f-ce89-4e95-b56c-c84702352967] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:09.063343  452237 system_pods.go:61] "metrics-server-57f55c9bc5-th24j" [9f47b0d1-1399-4349-8f99-d85598461c68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:09.063383  452237 system_pods.go:61] "storage-provisioner" [f12f48e3-4e11-47e4-b785-ca9b47cbc0a4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:09.063396  452237 system_pods.go:74] duration metric: took 13.893709ms to wait for pod list to return data ...
	I0109 00:10:09.063407  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:09.067414  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:09.067457  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:09.067474  452237 node_conditions.go:105] duration metric: took 4.056143ms to run NodePressure ...
	I0109 00:10:09.067507  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:09.383666  452237 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389727  452237 kubeadm.go:787] kubelet initialised
	I0109 00:10:09.389749  452237 kubeadm.go:788] duration metric: took 6.05357ms waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389758  452237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:09.397162  452237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:08.658880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:08.659431  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:08.659468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:08.659353  453422 retry.go:31] will retry after 4.088487909s: waiting for machine to come up
	I0109 00:10:08.271546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:10.273183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:07.830081  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.830237  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.846118  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.329537  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.329642  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.345267  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.829229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.829351  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.845147  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.329244  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.329371  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.343552  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.829910  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.829999  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.841589  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.330229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.330316  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.346027  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.830077  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.830193  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.842301  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.329908  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.330029  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.341398  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.829904  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.830007  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.841281  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.841317  452488 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:11.841340  452488 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:11.841350  452488 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:11.841406  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:11.880872  452488 cri.go:89] found id: ""
	I0109 00:10:11.880993  452488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:11.896522  452488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:11.905372  452488 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:11.905452  452488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915053  452488 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915083  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:12.053489  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:11.406042  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:13.406387  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.752603  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753243  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has current primary IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753276  451943 main.go:141] libmachine: (old-k8s-version-003293) Found IP for machine: 192.168.72.81
	I0109 00:10:12.753290  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserving static IP address...
	I0109 00:10:12.753738  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.753770  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | skip adding static IP to network mk-old-k8s-version-003293 - found existing host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"}
	I0109 00:10:12.753790  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserved static IP address: 192.168.72.81
	I0109 00:10:12.753812  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting for SSH to be available...
	I0109 00:10:12.753829  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Getting to WaitForSSH function...
	I0109 00:10:12.756348  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756765  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.756798  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756931  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH client type: external
	I0109 00:10:12.756959  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa (-rw-------)
	I0109 00:10:12.756995  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:10:12.757008  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | About to run SSH command:
	I0109 00:10:12.757025  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | exit 0
	I0109 00:10:12.908563  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | SSH cmd err, output: <nil>: 
	I0109 00:10:12.909330  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetConfigRaw
	I0109 00:10:12.910245  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:12.913338  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.913744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.913778  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.914153  451943 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/config.json ...
	I0109 00:10:12.914422  451943 machine.go:88] provisioning docker machine ...
	I0109 00:10:12.914451  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:12.914678  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.914869  451943 buildroot.go:166] provisioning hostname "old-k8s-version-003293"
	I0109 00:10:12.914895  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.915042  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:12.917551  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.917918  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.917949  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.918083  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:12.918284  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918477  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918637  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:12.918824  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:12.919390  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:12.919409  451943 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003293 && echo "old-k8s-version-003293" | sudo tee /etc/hostname
	I0109 00:10:13.077570  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003293
	
	I0109 00:10:13.077613  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.081190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081575  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.081599  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081874  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.082128  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082377  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082568  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.082783  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.083268  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.083293  451943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003293/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:10:13.235134  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:10:13.235167  451943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:10:13.235216  451943 buildroot.go:174] setting up certificates
	I0109 00:10:13.235236  451943 provision.go:83] configureAuth start
	I0109 00:10:13.235254  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:13.235632  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:13.239282  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.239867  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.239902  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.240253  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.243109  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243516  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.243546  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243730  451943 provision.go:138] copyHostCerts
	I0109 00:10:13.243811  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:10:13.243826  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:10:13.243917  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:10:13.244095  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:10:13.244109  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:10:13.244139  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:10:13.244233  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:10:13.244244  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:10:13.244271  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:10:13.244357  451943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003293 san=[192.168.72.81 192.168.72.81 localhost 127.0.0.1 minikube old-k8s-version-003293]
	I0109 00:10:13.358229  451943 provision.go:172] copyRemoteCerts
	I0109 00:10:13.358298  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:10:13.358329  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.361495  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.361925  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.361961  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.362229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.362512  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.362707  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.362901  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:13.464633  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:10:13.491908  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:10:13.520424  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:10:13.551287  451943 provision.go:86] duration metric: configureAuth took 316.030603ms
	I0109 00:10:13.551322  451943 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:10:13.551588  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:10:13.551689  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.554570  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.554888  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.554941  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.555088  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.555402  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555595  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555803  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.555991  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.556435  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.556461  451943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:10:13.929994  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:10:13.930040  451943 machine.go:91] provisioned docker machine in 1.015597473s
	I0109 00:10:13.930056  451943 start.go:300] post-start starting for "old-k8s-version-003293" (driver="kvm2")
	I0109 00:10:13.930076  451943 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:10:13.930107  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:13.930498  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:10:13.930537  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.933680  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934172  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.934218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934589  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.934794  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.935029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.935189  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.038045  451943 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:10:14.044182  451943 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:10:14.044220  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:10:14.044315  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:10:14.044455  451943 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:10:14.044602  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:10:14.056820  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:14.083704  451943 start.go:303] post-start completed in 153.628012ms
	I0109 00:10:14.083736  451943 fix.go:56] fixHost completed within 20.447514213s
	I0109 00:10:14.083765  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.087190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087732  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.087776  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087968  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.088229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088467  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088630  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.088863  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:14.089367  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:14.089389  451943 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:10:14.224545  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704759014.163550757
	
	I0109 00:10:14.224580  451943 fix.go:206] guest clock: 1704759014.163550757
	I0109 00:10:14.224591  451943 fix.go:219] Guest: 2024-01-09 00:10:14.163550757 +0000 UTC Remote: 2024-01-09 00:10:14.083740733 +0000 UTC m=+363.223126670 (delta=79.810024ms)
	I0109 00:10:14.224620  451943 fix.go:190] guest clock delta is within tolerance: 79.810024ms
	I0109 00:10:14.224627  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 20.588443227s
	I0109 00:10:14.224659  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.224961  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:14.228116  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228565  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.228645  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228870  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229553  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229781  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229882  451943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:10:14.229958  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.230034  451943 ssh_runner.go:195] Run: cat /version.json
	I0109 00:10:14.230062  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.233060  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233305  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233484  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233511  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233691  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.233903  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233926  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233959  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234064  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.234220  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234290  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234400  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.234418  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234557  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.328685  451943 ssh_runner.go:195] Run: systemctl --version
	I0109 00:10:14.359854  451943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:10:14.515121  451943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:10:14.525585  451943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:10:14.525668  451943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:10:14.549678  451943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:10:14.549719  451943 start.go:475] detecting cgroup driver to use...
	I0109 00:10:14.549804  451943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:10:14.569734  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:10:14.587820  451943 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:10:14.587921  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:10:14.601724  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:10:14.615402  451943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:10:14.732774  451943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:10:14.872480  451943 docker.go:219] disabling docker service ...
	I0109 00:10:14.872579  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:10:14.887044  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:10:14.904944  451943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:10:15.043833  451943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:10:15.162992  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:10:15.176677  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:10:15.197594  451943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0109 00:10:15.197674  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.207993  451943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:10:15.208071  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.218230  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.228291  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.238163  451943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:10:15.248394  451943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:10:15.257457  451943 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:10:15.257541  451943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:10:15.271604  451943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:10:15.282409  451943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:10:15.401506  451943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:10:15.586851  451943 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:10:15.586942  451943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:10:15.593734  451943 start.go:543] Will wait 60s for crictl version
	I0109 00:10:15.593798  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:15.598705  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:10:15.642640  451943 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:10:15.642751  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.714964  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.773793  451943 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0109 00:10:15.775287  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:15.778313  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.778769  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:15.778795  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.779046  451943 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:10:15.783496  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:15.795338  451943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:10:15.795427  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:15.844077  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:15.844162  451943 ssh_runner.go:195] Run: which lz4
	I0109 00:10:15.848502  451943 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:10:15.852893  451943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:10:15.852949  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0109 00:10:12.274183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:14.770967  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:16.781482  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.786247  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.017442  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.128701  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.223775  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:13.223873  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:13.724895  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.224593  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.724375  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.224993  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.724059  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.747019  452488 api_server.go:72] duration metric: took 2.523230788s to wait for apiserver process to appear ...
	I0109 00:10:15.747056  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:15.747083  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.747711  452488 api_server.go:269] stopped: https://192.168.39.73:8444/healthz: Get "https://192.168.39.73:8444/healthz": dial tcp 192.168.39.73:8444: connect: connection refused
	I0109 00:10:16.247411  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.407079  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.407307  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:19.407533  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.632956  451943 crio.go:444] Took 1.784489 seconds to copy over tarball
	I0109 00:10:17.633087  451943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:10:19.999506  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:19.999551  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:19.999569  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.066949  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:20.066982  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:20.247460  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.256943  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.256985  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:20.747576  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.755833  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.755892  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:21.247473  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:21.255476  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:10:21.266074  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:10:21.266115  452488 api_server.go:131] duration metric: took 5.519049271s to wait for apiserver health ...
	I0109 00:10:21.266127  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:21.266136  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:21.401812  452488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:19.272981  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.770765  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.903126  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:21.921050  452488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:21.946628  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:21.959029  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:21.959077  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:21.959089  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:21.959100  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:21.959110  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:21.959125  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:21.959141  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:21.959149  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:21.959165  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:21.959178  452488 system_pods.go:74] duration metric: took 12.524667ms to wait for pod list to return data ...
	I0109 00:10:21.959198  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:21.963572  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:21.963614  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:21.963629  452488 node_conditions.go:105] duration metric: took 4.420685ms to run NodePressure ...
	I0109 00:10:21.963653  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:23.566660  452488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.602978271s)
	I0109 00:10:23.566704  452488 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573882  452488 kubeadm.go:787] kubelet initialised
	I0109 00:10:23.573911  452488 kubeadm.go:788] duration metric: took 7.19484ms waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573923  452488 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:23.590206  452488 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.603347  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603402  452488 pod_ready.go:81] duration metric: took 13.169776ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.603416  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603426  452488 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.614946  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.614986  452488 pod_ready.go:81] duration metric: took 11.548332ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.615003  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.615012  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.628345  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628378  452488 pod_ready.go:81] duration metric: took 13.353873ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.628389  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628396  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.635987  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636023  452488 pod_ready.go:81] duration metric: took 7.619372ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.636043  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636072  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.972993  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973028  452488 pod_ready.go:81] duration metric: took 336.946722ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.973040  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973046  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.371951  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.371991  452488 pod_ready.go:81] duration metric: took 398.932785ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.372016  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.372026  452488 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.775778  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775825  452488 pod_ready.go:81] duration metric: took 403.787436ms waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.775842  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775867  452488 pod_ready.go:38] duration metric: took 1.201917208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:24.775895  452488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:10:24.793136  452488 ops.go:34] apiserver oom_adj: -16
	I0109 00:10:24.793169  452488 kubeadm.go:640] restartCluster took 22.990690796s
	I0109 00:10:24.793182  452488 kubeadm.go:406] StartCluster complete in 23.05448254s
	I0109 00:10:24.793207  452488 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.793302  452488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:10:24.795707  452488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.796107  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:10:24.796368  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:10:24.796346  452488 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:10:24.796413  452488 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796432  452488 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796457  452488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-834116"
	I0109 00:10:24.796466  452488 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.796477  452488 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:10:24.796560  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796982  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.796998  452488 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.797017  452488 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:24.797020  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0109 00:10:24.797025  452488 addons.go:246] addon metrics-server should already be in state true
	I0109 00:10:24.797083  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797296  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.797477  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797513  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.803857  452488 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-834116" context rescaled to 1 replicas
	I0109 00:10:24.803958  452488 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:10:24.806278  452488 out.go:177] * Verifying Kubernetes components...
	I0109 00:10:24.807850  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:10:24.817319  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0109 00:10:24.817600  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0109 00:10:24.817766  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818023  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818247  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818270  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.818697  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.818899  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818913  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0109 00:10:24.818937  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.819412  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.819459  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.823502  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.823611  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.824834  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.824859  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.824880  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.825291  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.826131  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.826160  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.829056  452488 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.829115  452488 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:10:24.829158  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.829610  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.829968  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.839969  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0109 00:10:24.840508  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.841140  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.841167  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.841542  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.841864  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.843844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.846088  452488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:24.844882  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0109 00:10:24.848051  452488 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:24.848069  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:10:24.848093  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.848445  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.849053  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.849074  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.849484  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.849550  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0109 00:10:24.849671  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.851401  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.851914  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.851961  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.853938  452488 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:10:22.516402  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:24.907337  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.059397  451943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.42624365s)
	I0109 00:10:21.059430  451943 crio.go:451] Took 3.426440 seconds to extract the tarball
	I0109 00:10:21.059441  451943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:21.109544  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:21.177321  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:21.177353  451943 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:10:21.177408  451943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.177455  451943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.177499  451943 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.177679  451943 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.177728  451943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.177688  451943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179256  451943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179325  451943 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0109 00:10:21.179257  451943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.179429  451943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.179551  451943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.179599  451943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.179888  451943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.180077  451943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.354975  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0109 00:10:21.363097  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.390461  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.393703  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.423416  451943 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0109 00:10:21.423475  451943 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0109 00:10:21.423523  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.433698  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.446038  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.466118  451943 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0109 00:10:21.466213  451943 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.466351  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.499618  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.516687  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.517553  451943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0109 00:10:21.517576  451943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0109 00:10:21.517608  451943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.517642  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0109 00:10:21.517653  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.517609  451943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.517735  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.543109  451943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0109 00:10:21.543170  451943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.543228  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571015  451943 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0109 00:10:21.571069  451943 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.571122  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571130  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.627517  451943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0109 00:10:21.627573  451943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.627623  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.730620  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0109 00:10:21.730693  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.730751  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.730772  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.730775  451943 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.730876  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.730899  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0109 00:10:21.730965  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.861219  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0109 00:10:21.861308  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0109 00:10:21.870996  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0109 00:10:21.871033  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0109 00:10:21.871087  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0109 00:10:21.871117  451943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0109 00:10:21.871136  451943 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.871176  451943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0109 00:10:23.431278  451943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.560066098s)
	I0109 00:10:23.431320  451943 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0109 00:10:23.431403  451943 cache_images.go:92] LoadImages completed in 2.25403413s
	W0109 00:10:23.431502  451943 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0109 00:10:23.431630  451943 ssh_runner.go:195] Run: crio config
	I0109 00:10:23.501412  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:23.501437  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:23.501460  451943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:23.501478  451943 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003293 NodeName:old-k8s-version-003293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0109 00:10:23.501642  451943 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003293"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-003293
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.81:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:23.501740  451943 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003293 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:10:23.501815  451943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0109 00:10:23.515496  451943 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:23.515613  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:23.528701  451943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0109 00:10:23.549023  451943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:23.568686  451943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0109 00:10:23.588702  451943 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:23.593056  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:23.609254  451943 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293 for IP: 192.168.72.81
	I0109 00:10:23.609338  451943 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:23.609556  451943 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:23.609643  451943 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:23.609767  451943 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/client.key
	I0109 00:10:23.609842  451943 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key.289ddd16
	I0109 00:10:23.609908  451943 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key
	I0109 00:10:23.610069  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:23.610137  451943 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:23.610158  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:23.610197  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:23.610232  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:23.610265  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:23.610323  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:23.611274  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:23.637653  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0109 00:10:23.664578  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:23.694133  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:10:23.722658  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:23.750223  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:23.778539  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:23.802865  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:23.829553  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:23.857468  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:23.886744  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:23.913384  451943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:23.931928  451943 ssh_runner.go:195] Run: openssl version
	I0109 00:10:23.938105  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:23.949750  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955870  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955954  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.962486  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:23.975292  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:23.988504  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.993956  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.994025  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:24.000015  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:24.010775  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:24.021665  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026909  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026972  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.032957  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:24.043813  451943 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:24.048745  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:24.055015  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:24.061551  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:24.068075  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:24.075942  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:24.081898  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:10:24.088900  451943 kubeadm.go:404] StartCluster: {Name:old-k8s-version-003293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:24.089008  451943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:24.089075  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:24.138907  451943 cri.go:89] found id: ""
	I0109 00:10:24.139089  451943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:24.152607  451943 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:24.152636  451943 kubeadm.go:636] restartCluster start
	I0109 00:10:24.152696  451943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:24.166246  451943 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.167660  451943 kubeconfig.go:92] found "old-k8s-version-003293" server: "https://192.168.72.81:8443"
	I0109 00:10:24.171161  451943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:24.183456  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.183533  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.197246  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.684537  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.684670  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.698158  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.184562  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.184662  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.196624  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.684258  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.684379  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.699808  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.852491  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.852608  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.852621  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.855293  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.855444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.855453  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:10:24.855467  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:10:24.855484  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.855664  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.855746  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.855858  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.856036  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.857435  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.857481  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.858678  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859181  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.859219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859402  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.859570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.859724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.859856  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.875791  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0109 00:10:24.876275  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.876817  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.876856  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.877200  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.877454  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.879333  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.879644  452488 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:24.879661  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:10:24.879677  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.882683  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.883208  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883504  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.883694  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.883877  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.884070  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:25.036727  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:25.071034  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:10:25.071059  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:10:25.079722  452488 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:25.079745  452488 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0109 00:10:25.096822  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:25.107155  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:10:25.107187  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:10:25.149550  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:25.149576  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:10:25.202736  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:26.696247  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.659482228s)
	I0109 00:10:26.696317  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696334  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696330  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.599464128s)
	I0109 00:10:26.696379  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696398  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696816  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696856  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696855  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696865  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696874  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696883  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696899  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696908  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696935  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696945  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.697254  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697306  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697406  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697461  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697410  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.712803  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.712835  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.713140  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.713162  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736360  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.533581555s)
	I0109 00:10:26.736408  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.736780  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.736826  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.736841  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736852  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.737154  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.737190  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.737205  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.737215  452488 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:26.739310  452488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:10:23.774928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.270567  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.740691  452488 addons.go:508] enable addons completed in 1.94435105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:10:27.084669  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
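The addon flow logged above copies each manifest into /etc/kubernetes/addons ("scp memory -->") and then applies it with the guest's own kubectl binary and kubeconfig. The sketch below is a minimal local stand-in for that apply step; in minikube the command actually runs inside the VM over SSH via ssh_runner, and the helper name here is hypothetical.

package sketch

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the logged command:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/<version>/kubectl apply -f <manifest> ...
func applyAddonManifests(k8sVersion string, manifests []string) error {
	args := []string{
		"env",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", k8sVersion),
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// "env" keeps the kubeconfig visible to kubectl under sudo, matching the log's invocation.
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

Called as applyAddonManifests("v1.28.4", []string{"/etc/kubernetes/addons/storage-provisioner.yaml"}), this reproduces the first apply seen at 00:10:25.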
	I0109 00:10:27.404032  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.407712  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.184150  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.184272  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.196020  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:26.684603  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.684710  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.699571  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.184212  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.184309  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.196193  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.684572  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.684658  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.697405  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.183918  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.184043  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.197428  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.684565  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.684683  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.698124  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.183601  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.183725  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.195941  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.683554  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.683647  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.695548  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.184015  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.184116  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.196332  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.684533  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.684661  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.697315  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
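The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs above come from a simple process probe: pgrep exits with status 1 when no kube-apiserver process started by minikube matches, and the caller retries roughly every 500ms. A rough equivalent, with an illustrative helper name:

package sketch

import (
	"errors"
	"os/exec"
)

// apiserverRunning reports whether a kube-apiserver process launched for this
// minikube profile is currently visible to pgrep.
func apiserverRunning() (bool, error) {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		// Exit status 1: no matching process, i.e. the "stopped" case in the log.
		return false, nil
	}
	return false, err
}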
	I0109 00:10:28.771203  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.269907  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.584966  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:30.585616  452488 node_ready.go:49] node "default-k8s-diff-port-834116" has status "Ready":"True"
	I0109 00:10:30.585646  452488 node_ready.go:38] duration metric: took 5.505876157s waiting for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:30.585661  452488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:30.593510  452488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602388  452488 pod_ready.go:92] pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.602420  452488 pod_ready.go:81] duration metric: took 8.875538ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602438  452488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608316  452488 pod_ready.go:92] pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.608343  452488 pod_ready.go:81] duration metric: took 5.896652ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608355  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614031  452488 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.614056  452488 pod_ready.go:81] duration metric: took 5.692676ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614068  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619101  452488 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.619120  452488 pod_ready.go:81] duration metric: took 5.045637ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619129  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986089  452488 pod_ready.go:92] pod "kube-proxy-p9dmf" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.986121  452488 pod_ready.go:81] duration metric: took 366.984678ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986135  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385215  452488 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:31.385244  452488 pod_ready.go:81] duration metric: took 399.100168ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385254  452488 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
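The pod_ready.go lines interleaved through this log poll each system-critical pod until its Ready condition is True (pod_ready.go:92) or, while it is still False, keep logging and retrying (pod_ready.go:102). Assuming client-go, the wait loop is roughly the following; this is an approximation, not minikube's actual implementation.

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod's conditions until Ready=True or the timeout elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}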
	I0109 00:10:31.904561  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.905393  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.183976  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.184088  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:31.683769  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.683876  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.695944  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.184543  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.184631  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.197273  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.683504  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.683613  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.696431  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.183904  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.183981  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.195623  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.684295  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.684408  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.697442  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.184151  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:34.184264  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:34.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.196409  451943 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:34.196451  451943 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:34.196467  451943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:34.196558  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:34.243566  451943 cri.go:89] found id: ""
	I0109 00:10:34.243656  451943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:34.260912  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:34.270763  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:34.270859  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280082  451943 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280114  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:34.411011  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.279804  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.503377  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.616758  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.707051  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:35.707153  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
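Because the config check at 00:10:34 found no kubeconfigs on disk, the cluster is rebuilt by replaying individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml instead of running a full init. A condensed sketch of that phase sequence follows; the execution plumbing is a stand-in for minikube's ssh_runner, and the "addon all" phase runs later, after the apiserver is healthy (see 00:10:44 below).

package sketch

import (
	"fmt"
	"os/exec"
)

// reconfigureControlPlane replays the init phases seen in the log against an
// existing kubeadm config, using the versioned binaries under binDir.
func reconfigureControlPlane(binDir, config string) error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		script := fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, config,
		)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}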
	I0109 00:10:33.771119  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.271823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.399336  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.893942  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.905685  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:38.408847  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.207669  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:36.708189  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.207300  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.259562  451943 api_server.go:72] duration metric: took 1.552509336s to wait for apiserver process to appear ...
	I0109 00:10:37.259602  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:37.259628  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:38.272478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.272571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:37.894659  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.393328  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.393530  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.260559  451943 api_server.go:269] stopped: https://192.168.72.81:8443/healthz: Get "https://192.168.72.81:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0109 00:10:42.260609  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.136163  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.136216  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.136236  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.196804  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.196846  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.260001  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.270495  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.270549  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:43.759989  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.813746  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.813787  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.260614  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.271111  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:44.271144  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.760496  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.771584  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:10:44.780881  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:10:44.780911  451943 api_server.go:131] duration metric: took 7.521300216s to wait for apiserver health ...
	I0109 00:10:44.780923  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:44.780933  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:44.783223  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
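The healthz probe above walks through a typical apiserver startup sequence: an anonymous 403 while RBAC bootstrap roles are still missing, a 500 while post-start hooks (rbac/bootstrap-roles, ca-registration, ...) finish, and finally 200. A generic polling loop in that spirit is sketched below; TLS verification is skipped only because this illustration cares about the status code, not the certificate chain.

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz URL until it returns HTTP 200 or the
// overall timeout is exhausted, pausing ~500ms between attempts as in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}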
	I0109 00:10:40.906182  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:43.407169  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.784832  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:44.802495  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:44.821665  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:44.832420  451943 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:44.832452  451943 system_pods.go:61] "coredns-5644d7b6d9-5hqlw" [b6d5e87b-e72e-47bb-92b2-afecece262c5] Running
	I0109 00:10:44.832456  451943 system_pods.go:61] "coredns-5644d7b6d9-j4nnt" [d8995b4a-0ebf-406b-9937-09ba09591c78] Running
	I0109 00:10:44.832462  451943 system_pods.go:61] "etcd-old-k8s-version-003293" [8b9f9b32-dfe9-4cfe-856b-3aec43645e1e] Running
	I0109 00:10:44.832467  451943 system_pods.go:61] "kube-apiserver-old-k8s-version-003293" [48f5c692-7501-45ae-a53a-49e330129c36] Running
	I0109 00:10:44.832471  451943 system_pods.go:61] "kube-controller-manager-old-k8s-version-003293" [e458a3e9-ae8b-4ab7-bdc5-61b4321cca4a] Running
	I0109 00:10:44.832475  451943 system_pods.go:61] "kube-proxy-bc4tl" [74020495-07c6-441b-9b46-2f6a103d65eb] Running
	I0109 00:10:44.832478  451943 system_pods.go:61] "kube-scheduler-old-k8s-version-003293" [6a8e330c-f4bb-4bfd-b610-9071077fbb0f] Running
	I0109 00:10:44.832482  451943 system_pods.go:61] "storage-provisioner" [cbfd54c3-1952-4c0f-9272-29e2a8a4d5ed] Running
	I0109 00:10:44.832489  451943 system_pods.go:74] duration metric: took 10.801262ms to wait for pod list to return data ...
	I0109 00:10:44.832498  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:44.836130  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:44.836175  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:44.836196  451943 node_conditions.go:105] duration metric: took 3.685161ms to run NodePressure ...
	I0109 00:10:44.836220  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:45.117528  451943 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:45.121965  451943 retry.go:31] will retry after 324.075641ms: kubelet not initialised
	I0109 00:10:45.451702  451943 retry.go:31] will retry after 510.869227ms: kubelet not initialised
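The bridge CNI step at 00:10:44 copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact template is not reproduced in the log; for orientation, a generic CNI bridge + portmap conflist of the same general shape is shown below, embedded as a Go constant only to keep these sketches in one language. Names and the subnet are placeholders, not minikube's actual values.

package sketch

// genericBridgeConflist is a generic example of a bridge CNI configuration,
// not necessarily the template minikube writes to 1-k8s.conflist.
const genericBridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`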
	I0109 00:10:42.770145  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.271625  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.394539  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:46.894669  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.910325  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:48.406435  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.969561  451943 retry.go:31] will retry after 435.571732ms: kubelet not initialised
	I0109 00:10:46.411948  451943 retry.go:31] will retry after 1.046618493s: kubelet not initialised
	I0109 00:10:47.471972  451943 retry.go:31] will retry after 1.328746031s: kubelet not initialised
	I0109 00:10:48.805606  451943 retry.go:31] will retry after 1.964166074s: kubelet not initialised
	I0109 00:10:50.776656  451943 retry.go:31] will retry after 2.966424358s: kubelet not initialised
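The retry.go lines above implement a simple backoff: each failed "kubelet not initialised" check schedules the next attempt after a progressively longer, jittered delay until the overall wait budget is spent. Stripped of jitter, the pattern is roughly the following; the function name is illustrative, not minikube's retry package API.

package sketch

import "time"

// retryWithBackoff keeps calling check until it succeeds or maxWait elapses,
// growing the pause between attempts like the increasing delays in the log.
func retryWithBackoff(check func() bool, initial, maxWait time.Duration) bool {
	delay := initial
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(delay)
		delay *= 2
	}
	return false
}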
	I0109 00:10:47.271965  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.773571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.393384  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:51.393857  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:50.905980  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:52.404441  452237 pod_ready.go:92] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.404467  452237 pod_ready.go:81] duration metric: took 43.007278698s waiting for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.404477  452237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409827  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.409851  452237 pod_ready.go:81] duration metric: took 5.368556ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409862  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415211  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.415233  452237 pod_ready.go:81] duration metric: took 5.363915ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415243  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420309  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.420329  452237 pod_ready.go:81] duration metric: took 5.078283ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420337  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425229  452237 pod_ready.go:92] pod "kube-proxy-kxjqj" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.425251  452237 pod_ready.go:81] duration metric: took 4.908776ms waiting for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425260  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.801958  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.801989  452237 pod_ready.go:81] duration metric: took 376.723222ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.802000  452237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:54.811346  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.748552  451943 retry.go:31] will retry after 3.201777002s: kubelet not initialised
	I0109 00:10:52.273938  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:54.771590  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.775438  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.422099  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:55.894657  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:57.310528  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:59.313642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.956459  451943 retry.go:31] will retry after 6.469663917s: kubelet not initialised
	I0109 00:10:59.272417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.272940  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:58.393999  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:00.893766  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.809942  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.309972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:03.432087  451943 retry.go:31] will retry after 13.730562228s: kubelet not initialised
	I0109 00:11:03.771273  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.268462  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:02.894171  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.894858  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:07.393254  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.310613  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.812051  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.270554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:10.272757  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:09.893982  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.894729  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.310615  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:13.311452  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:12.770003  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.770452  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.393106  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:16.394348  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:15.809972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.309870  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:17.168682  451943 retry.go:31] will retry after 14.832819941s: kubelet not initialised
	I0109 00:11:17.271266  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:19.271908  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.771727  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.394025  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:20.808968  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:22.810167  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.773732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:26.269527  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.394213  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.893851  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.310683  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:27.810354  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:29.814175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.271026  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.271149  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.393310  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.393582  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.310474  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.312045  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.007072  451943 kubeadm.go:787] kubelet initialised
	I0109 00:11:32.007097  451943 kubeadm.go:788] duration metric: took 46.889534921s waiting for restarted kubelet to initialise ...
	I0109 00:11:32.007109  451943 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:11:32.012969  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018937  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.018957  451943 pod_ready.go:81] duration metric: took 5.963591ms waiting for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018975  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028039  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.028067  451943 pod_ready.go:81] duration metric: took 9.084525ms waiting for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028078  451943 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032808  451943 pod_ready.go:92] pod "etcd-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.032832  451943 pod_ready.go:81] duration metric: took 4.746043ms waiting for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032843  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037435  451943 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.037466  451943 pod_ready.go:81] duration metric: took 4.610014ms waiting for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037478  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405716  451943 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.405742  451943 pod_ready.go:81] duration metric: took 368.257236ms waiting for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405760  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806721  451943 pod_ready.go:92] pod "kube-proxy-bc4tl" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.806747  451943 pod_ready.go:81] duration metric: took 400.981273ms waiting for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806756  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205810  451943 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:33.205840  451943 pod_ready.go:81] duration metric: took 399.074693ms waiting for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205855  451943 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:35.213679  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.271553  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.773998  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.893079  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:35.393616  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.393839  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:36.809214  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:38.809702  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.714222  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.213748  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.270073  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.270564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.771950  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.894200  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.895632  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.810676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:43.310394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:42.214955  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.713236  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.270745  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.769008  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.395323  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.893378  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:45.811067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.310292  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.713278  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:49.212583  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.769858  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.270380  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.894013  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.896386  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.311125  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:52.809499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:54.811339  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.213641  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.214157  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.711725  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.271867  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.771478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.393541  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.894575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.310953  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:59.809359  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.713429  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.215472  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.270445  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.770718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.393555  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:01.810389  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:04.311994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:02.713532  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.213545  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.270633  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.771349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.392243  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.393601  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:06.809758  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.310090  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.713345  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.713636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.774207  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:10.271536  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.892992  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.894465  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.394064  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.310240  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.311902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.713857  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.714968  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.770737  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.271471  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:14.893031  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.393146  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.312766  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.808902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:16.213122  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:18.215771  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.713269  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.772762  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.274611  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:19.399686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:21.895279  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.315434  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.809703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.813460  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:23.215054  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.216598  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.771192  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.271732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.392768  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:26.393642  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.309913  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.310558  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.713280  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.713388  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.771683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.269862  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:28.892939  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.894280  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:31.310860  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.313161  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.215375  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.713965  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.271111  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.770162  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.393271  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.393849  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.811747  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:38.311158  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.212773  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.712777  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.273180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.274403  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.770772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.893508  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.893834  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.394002  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:40.311402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.809836  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.714285  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.213161  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:43.772982  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.269879  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.893044  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.894333  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:45.310764  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:47.810622  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.213392  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.214029  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.712956  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.273388  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.772779  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:49.393068  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:51.894350  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.314344  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:52.809208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.809757  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.213473  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.213609  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.270014  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.270513  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.392981  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:56.896752  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.310923  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.809897  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.713409  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:00.213074  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.771956  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.772597  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.776736  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.392477  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.393047  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.810055  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.316038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:02.214227  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.714073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.271552  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.274081  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:03.394211  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:05.892722  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.808153  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.809658  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.213252  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:09.214016  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.771514  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.271265  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.893535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.394062  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.811210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.309480  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.713294  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.714070  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.274656  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.770363  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:12.892232  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:14.892967  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.893970  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.309955  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.310537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.312112  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.213649  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:18.712398  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:20.713447  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.770504  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.776344  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.391934  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.393412  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.809067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.811245  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.715248  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.215489  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.270417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:24.276304  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.771255  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.892801  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.395553  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.815479  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.309581  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:27.713470  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:29.713667  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.772564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.270216  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.892655  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.893557  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.310454  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.311950  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.809831  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.714418  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.213103  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:33.270895  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.772159  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.894686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.393366  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.810699  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.315029  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.217502  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:38.713073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.772491  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:40.269651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.894503  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.895994  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.393607  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.808659  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.809657  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.212704  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.713415  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.270157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.769816  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.770516  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.394641  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.895010  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.310425  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.310812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.213445  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.714493  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:49.270269  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.262625  451984 pod_ready.go:81] duration metric: took 4m0.000332739s waiting for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
	E0109 00:13:50.262665  451984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:13:50.262695  451984 pod_ready.go:38] duration metric: took 4m14.064299354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:13:50.262735  451984 kubeadm.go:640] restartCluster took 4m35.223413047s
	W0109 00:13:50.262837  451984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:13:50.262989  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:13:49.394039  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.893287  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.809875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.311275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.214302  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.215860  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.714407  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.893351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.895250  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.811061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:57.811763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.213089  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.214795  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.393252  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.394330  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.395864  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:03.952243  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.689217944s)
	I0109 00:14:03.952404  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:03.965852  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:14:03.975784  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:14:03.984599  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:14:03.984649  451984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:14:04.041116  451984 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:14:04.041179  451984 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:14:04.213643  451984 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:14:04.213797  451984 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:14:04.213932  451984 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:14:04.470597  451984 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:14:00.312213  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.813799  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.816592  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.472836  451984 out.go:204]   - Generating certificates and keys ...
	I0109 00:14:04.473031  451984 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:14:04.473115  451984 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:14:04.473210  451984 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:14:04.473272  451984 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:14:04.473376  451984 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:14:04.473804  451984 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:14:04.474373  451984 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:14:04.474832  451984 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:14:04.475386  451984 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:14:04.475875  451984 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:14:04.476290  451984 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:14:04.476378  451984 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:14:04.599856  451984 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:14:04.905946  451984 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:14:05.274703  451984 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:14:05.463087  451984 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:14:05.464020  451984 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:14:05.468993  451984 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:14:02.215257  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.714764  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:05.471038  451984 out.go:204]   - Booting up control plane ...
	I0109 00:14:05.471146  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:14:05.471245  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:14:05.471342  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:14:05.488208  451984 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:14:05.489177  451984 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:14:05.489282  451984 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:14:05.629700  451984 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:14:04.895593  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.396575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.310589  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.809734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.212902  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.214384  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.895351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:12.397437  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.633863  451984 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004133 seconds
	I0109 00:14:13.634067  451984 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:14:13.657224  451984 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:14:14.196593  451984 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:14:14.196798  451984 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-845373 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:14:14.715124  451984 kubeadm.go:322] [bootstrap-token] Using token: 0z1u86.ex8qfq3o12xtqu87
	I0109 00:14:14.716600  451984 out.go:204]   - Configuring RBAC rules ...
	I0109 00:14:14.716727  451984 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:14:14.724791  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:14:14.734361  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:14:14.742345  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:14:14.749616  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:14:14.753942  451984 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:14:14.774188  451984 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:14:15.042710  451984 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:14:15.131751  451984 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:14:15.132745  451984 kubeadm.go:322] 
	I0109 00:14:15.132804  451984 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:14:15.132810  451984 kubeadm.go:322] 
	I0109 00:14:15.132872  451984 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:14:15.132879  451984 kubeadm.go:322] 
	I0109 00:14:15.132898  451984 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:14:15.132959  451984 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:14:15.133067  451984 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:14:15.133094  451984 kubeadm.go:322] 
	I0109 00:14:15.133160  451984 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:14:15.133173  451984 kubeadm.go:322] 
	I0109 00:14:15.133229  451984 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:14:15.133241  451984 kubeadm.go:322] 
	I0109 00:14:15.133313  451984 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:14:15.133412  451984 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:14:15.133510  451984 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:14:15.133524  451984 kubeadm.go:322] 
	I0109 00:14:15.133644  451984 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:14:15.133761  451984 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:14:15.133777  451984 kubeadm.go:322] 
	I0109 00:14:15.133882  451984 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134003  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:14:15.134030  451984 kubeadm.go:322] 	--control-plane 
	I0109 00:14:15.134037  451984 kubeadm.go:322] 
	I0109 00:14:15.134137  451984 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:14:15.134145  451984 kubeadm.go:322] 
	I0109 00:14:15.134240  451984 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134415  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:14:15.135483  451984 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:14:15.135524  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:14:15.135536  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:14:15.137331  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:14:11.810358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.813252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:11.214971  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.713322  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.714895  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.138794  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:14:15.164722  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:14:15.236472  451984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:14:15.236536  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.236558  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=embed-certs-845373 minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.353564  451984 ops.go:34] apiserver oom_adj: -16
	I0109 00:14:15.675801  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.176590  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.676619  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.176120  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.893438  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.896780  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.311939  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.312023  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.213002  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.214958  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:17.676614  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.176469  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.676367  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.176646  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.676613  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.176615  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.676641  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.176075  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.676489  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:22.176784  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.395936  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:21.892353  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.810687  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.810879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.713569  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:25.213852  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.676054  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.176662  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.676911  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.175927  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.676685  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.176625  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.676281  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.176650  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.675943  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.176834  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.894745  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:26.394535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.676594  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.846642  451984 kubeadm.go:1088] duration metric: took 12.610179243s to wait for elevateKubeSystemPrivileges.
	I0109 00:14:27.846694  451984 kubeadm.go:406] StartCluster complete in 5m12.860674926s
	I0109 00:14:27.846775  451984 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.846922  451984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:14:27.849568  451984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.849886  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:14:27.850039  451984 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:14:27.850143  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:14:27.850168  451984 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845373"
	I0109 00:14:27.850185  451984 addons.go:69] Setting metrics-server=true in profile "embed-certs-845373"
	I0109 00:14:27.850196  451984 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-845373"
	W0109 00:14:27.850206  451984 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:14:27.850209  451984 addons.go:237] Setting addon metrics-server=true in "embed-certs-845373"
	W0109 00:14:27.850226  451984 addons.go:246] addon metrics-server should already be in state true
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850780  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850804  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850886  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850916  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850174  451984 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845373"
	I0109 00:14:27.850983  451984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845373"
	I0109 00:14:27.851436  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.851473  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.869118  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0109 00:14:27.869634  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.870272  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.870301  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.870793  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.870883  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0109 00:14:27.871047  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0109 00:14:27.871320  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871380  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871694  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.871740  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.871880  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871910  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.871917  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871934  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.872311  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872318  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872472  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.872864  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.872907  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.875833  451984 addons.go:237] Setting addon default-storageclass=true in "embed-certs-845373"
	W0109 00:14:27.875851  451984 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:14:27.875874  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.876143  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.876172  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0109 00:14:27.892642  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0109 00:14:27.893165  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893218  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893382  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893725  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893751  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.893889  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893906  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894287  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894344  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894351  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.894366  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894531  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.894905  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894920  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.894955  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.895325  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.897315  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.897565  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.899343  451984 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:14:27.901058  451984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:14:27.903097  451984 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:27.903113  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:14:27.903129  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.901085  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:14:27.903182  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:14:27.903190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.907703  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908100  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908474  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908505  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908744  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908765  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908869  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.908924  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.909079  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909118  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909274  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909303  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909444  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.909660  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.913404  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0109 00:14:27.913992  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.914388  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.914409  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.914831  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.915055  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.916650  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.916872  451984 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:27.916891  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:14:27.916911  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.919557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.919945  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.919962  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.920188  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.920346  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.920520  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.920627  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:28.169436  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:14:28.180527  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:28.194004  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:14:28.194025  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:14:28.216619  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:28.258292  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:14:28.258321  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:14:28.320624  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:28.320652  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:14:28.355471  451984 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-845373" context rescaled to 1 replicas
	I0109 00:14:28.355514  451984 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:14:28.357573  451984 out.go:177] * Verifying Kubernetes components...
	I0109 00:14:25.309676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.312462  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:29.810262  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:28.359075  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:28.379542  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:30.061115  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.891626144s)
	I0109 00:14:30.061149  451984 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
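The completed pipeline above is how the host record gets into CoreDNS: the live coredns ConfigMap is read, a hosts{} block mapping host.minikube.internal to the host-only gateway (192.168.50.1 here) is inserted above the forward directive with sed, and the result is pushed back with kubectl replace. A minimal, hypothetical Go sketch of that step, reusing the exact pipeline and paths recorded in the log (this is illustrative, not minikube's helper code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Values below come straight from the log; everything else is hypothetical.
	hostIP := "192.168.50.1"
	kubectl := "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"

	// Read the live Corefile, insert a hosts{} block above the forward
	// directive, and push the edited ConfigMap back with `kubectl replace`.
	script := fmt.Sprintf(
		`%[1]s -n kube-system get configmap coredns -o yaml | `+
			`sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[2]s host.minikube.internal\n           fallthrough\n        }' | `+
			`%[1]s replace -f -`,
		kubectl, hostIP)

	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Println(string(out), err)
}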
	I0109 00:14:30.452861  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.236197297s)
	I0109 00:14:30.452929  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.452943  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.452943  451984 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.09383281s)
	I0109 00:14:30.453122  451984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.453131  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.272573904s)
	I0109 00:14:30.453293  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453306  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453320  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453311  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453332  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453342  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453674  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453693  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453700  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453708  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453740  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453752  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453764  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453784  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.454074  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.454093  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.454107  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.457209  451984 node_ready.go:49] node "embed-certs-845373" has status "Ready":"True"
	I0109 00:14:30.457229  451984 node_ready.go:38] duration metric: took 4.077361ms waiting for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.457238  451984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:30.488244  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.488275  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.488609  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.488634  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.488660  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.489887  451984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:30.508615  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.129028413s)
	I0109 00:14:30.508663  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.508677  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.508966  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.509058  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509152  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509175  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.509190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.509535  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509564  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509578  451984 addons.go:473] Verifying addon metrics-server=true in "embed-certs-845373"
	I0109 00:14:30.509582  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.511636  451984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:14:27.714663  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.213049  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.513246  451984 addons.go:508] enable addons completed in 2.663216413s: enabled=[storage-provisioner default-storageclass metrics-server]
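The addon sequence above stages each manifest on the node under /etc/kubernetes/addons/ (the "scp memory -->" lines) and then applies the related files in a single kubectl call before verifying the addon. A hedged sketch of that apply step, mirroring the logged metrics-server command rather than the actual addons.go code path:

package main

import "os/exec"

func main() {
	// Single kubectl invocation over all staged metrics-server manifests,
	// matching the command recorded in the log.
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.28.4/kubectl apply " +
		"-f /etc/kubernetes/addons/metrics-apiservice.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-deployment.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-rbac.yaml " +
		"-f /etc/kubernetes/addons/metrics-server-service.yaml"
	if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
		panic(err)
	}
}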
	I0109 00:14:31.999091  451984 pod_ready.go:92] pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:31.999122  451984 pod_ready.go:81] duration metric: took 1.509214799s waiting for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:31.999131  451984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005047  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.005077  451984 pod_ready.go:81] duration metric: took 5.937291ms waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005091  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011823  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.011853  451984 pod_ready.go:81] duration metric: took 6.752071ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011866  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017760  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.017782  451984 pod_ready.go:81] duration metric: took 5.908986ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017792  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058063  451984 pod_ready.go:92] pod "kube-proxy-nxtn2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.058094  451984 pod_ready.go:81] duration metric: took 40.295825ms waiting for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058104  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:28.397781  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.894153  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:31.394151  452488 pod_ready.go:81] duration metric: took 4m0.008881128s waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:31.394180  452488 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:14:31.394191  452488 pod_ready.go:38] duration metric: took 4m0.808517944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:31.394210  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:14:31.394307  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:31.394397  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:31.457897  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:31.457929  452488 cri.go:89] found id: ""
	I0109 00:14:31.457941  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:31.458002  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.463534  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:31.463632  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:31.524249  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:31.524284  452488 cri.go:89] found id: ""
	I0109 00:14:31.524296  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:31.524363  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.529188  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:31.529260  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:31.583505  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:31.583543  452488 cri.go:89] found id: ""
	I0109 00:14:31.583554  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:31.583618  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.589373  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:31.589466  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:31.639895  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:31.639931  452488 cri.go:89] found id: ""
	I0109 00:14:31.639942  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:31.640016  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.644881  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:31.644952  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:31.686002  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.686031  452488 cri.go:89] found id: ""
	I0109 00:14:31.686047  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:31.686114  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.691664  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:31.691754  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:31.745729  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:31.745757  452488 cri.go:89] found id: ""
	I0109 00:14:31.745766  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:31.745829  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.751116  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:31.751192  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:31.794856  452488 cri.go:89] found id: ""
	I0109 00:14:31.794890  452488 logs.go:284] 0 containers: []
	W0109 00:14:31.794901  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:31.794909  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:31.794976  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:31.840973  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:31.840999  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:31.841006  452488 cri.go:89] found id: ""
	I0109 00:14:31.841014  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:31.841084  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.845852  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.850824  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:31.850851  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:31.914344  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:31.914404  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.958899  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:31.958934  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:32.021319  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:32.021353  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:32.074995  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:32.075034  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:32.089535  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:32.089572  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:32.244418  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:32.244460  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:32.288116  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:32.288161  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:32.332939  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:32.332980  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:32.378455  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:32.378487  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:32.437376  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:32.437421  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:31.813208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.311338  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.215522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.712223  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.460309  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.460343  451984 pod_ready.go:81] duration metric: took 402.230769ms waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.460358  451984 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:34.470103  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.470854  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.911300  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:32.911345  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:32.959902  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:32.959942  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
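The "Gathering logs for ..." block above is the standard diagnostics pass: each control-plane component's container ID is found with `crictl ps -a --quiet --name=<component>`, and its last 400 lines are dumped with `crictl logs --tail 400 <id>`. A hedged Go sketch of that pattern, using only the two crictl invocations that appear verbatim in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components whose containers get inspected, as in the log above.
	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range names {
		// sudo crictl ps -a --quiet --name=<component>  (verbatim from the log)
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// sudo crictl logs --tail 400 <id>  (verbatim from the log)
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}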
	I0109 00:14:35.500402  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:14:35.516569  452488 api_server.go:72] duration metric: took 4m10.712558057s to wait for apiserver process to appear ...
	I0109 00:14:35.516600  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:14:35.516640  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:35.516690  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:35.559395  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:35.559421  452488 cri.go:89] found id: ""
	I0109 00:14:35.559429  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:35.559497  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.564381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:35.564468  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:35.604963  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:35.604991  452488 cri.go:89] found id: ""
	I0109 00:14:35.605004  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:35.605074  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.610352  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:35.610412  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:35.655316  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.655353  452488 cri.go:89] found id: ""
	I0109 00:14:35.655381  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:35.655471  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.660932  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:35.661015  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:35.702201  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:35.702228  452488 cri.go:89] found id: ""
	I0109 00:14:35.702237  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:35.702297  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.707544  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:35.707615  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:35.755445  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:35.755478  452488 cri.go:89] found id: ""
	I0109 00:14:35.755489  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:35.755555  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.760393  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:35.760470  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:35.813641  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:35.813672  452488 cri.go:89] found id: ""
	I0109 00:14:35.813682  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:35.813749  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.819342  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:35.819495  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:35.861693  452488 cri.go:89] found id: ""
	I0109 00:14:35.861723  452488 logs.go:284] 0 containers: []
	W0109 00:14:35.861732  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:35.861740  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:35.861807  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:35.900886  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:35.900931  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:35.900937  452488 cri.go:89] found id: ""
	I0109 00:14:35.900945  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:35.901005  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.905463  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.910271  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:35.910300  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:36.056761  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:36.056798  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:36.096707  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:36.096739  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:36.555891  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:36.555936  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:36.573167  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:36.573196  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:36.622139  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:36.622169  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:36.680395  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:36.680435  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:36.740350  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:36.740389  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:36.779409  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:36.779443  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:36.837425  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:36.837474  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:36.892724  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:36.892763  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:36.939944  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:36.939979  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:36.999567  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:36.999612  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:36.810729  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.810924  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.713630  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.213516  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.970746  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.468803  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.546015  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:14:39.551932  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:14:39.553444  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:14:39.553469  452488 api_server.go:131] duration metric: took 4.036861283s to wait for apiserver health ...
	I0109 00:14:39.553480  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:14:39.553512  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:39.553592  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:39.597338  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:39.597368  452488 cri.go:89] found id: ""
	I0109 00:14:39.597381  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:39.597450  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.602381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:39.602473  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:39.643738  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:39.643776  452488 cri.go:89] found id: ""
	I0109 00:14:39.643787  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:39.643854  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.649021  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:39.649096  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:39.692903  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:39.692926  452488 cri.go:89] found id: ""
	I0109 00:14:39.692934  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:39.692992  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.697806  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:39.697882  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:39.746679  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:39.746706  452488 cri.go:89] found id: ""
	I0109 00:14:39.746716  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:39.746765  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.752396  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:39.752459  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:39.800438  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:39.800461  452488 cri.go:89] found id: ""
	I0109 00:14:39.800470  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:39.800535  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.805644  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:39.805737  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:39.847341  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:39.847387  452488 cri.go:89] found id: ""
	I0109 00:14:39.847398  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:39.847465  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.851972  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:39.852053  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:39.899183  452488 cri.go:89] found id: ""
	I0109 00:14:39.899219  452488 logs.go:284] 0 containers: []
	W0109 00:14:39.899231  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:39.899239  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:39.899309  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:39.958353  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:39.958395  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:39.958400  452488 cri.go:89] found id: ""
	I0109 00:14:39.958409  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:39.958469  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.963264  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.968827  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:39.968858  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:40.015655  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:40.015685  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:40.161910  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:40.161944  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:40.200197  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:40.200233  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:40.244075  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:40.244119  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:40.655095  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:40.655160  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:40.711957  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:40.712004  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:40.765456  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:40.765503  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:40.824273  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:40.824320  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:40.887213  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:40.887252  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:40.925809  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:40.925842  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:40.967599  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:40.967635  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:41.021163  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:41.021219  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:43.543901  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:14:43.543933  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.543938  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.543943  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.543947  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.543951  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.543955  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.543962  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.543966  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.543974  452488 system_pods.go:74] duration metric: took 3.990487712s to wait for pod list to return data ...
	I0109 00:14:43.543982  452488 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:14:43.547032  452488 default_sa.go:45] found service account: "default"
	I0109 00:14:43.547063  452488 default_sa.go:55] duration metric: took 3.07377ms for default service account to be created ...
	I0109 00:14:43.547075  452488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:14:43.554265  452488 system_pods.go:86] 8 kube-system pods found
	I0109 00:14:43.554305  452488 system_pods.go:89] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.554314  452488 system_pods.go:89] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.554322  452488 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.554329  452488 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.554336  452488 system_pods.go:89] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.554343  452488 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.554356  452488 system_pods.go:89] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.554397  452488 system_pods.go:89] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.554420  452488 system_pods.go:126] duration metric: took 7.336546ms to wait for k8s-apps to be running ...
	I0109 00:14:43.554431  452488 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:14:43.554494  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:43.570839  452488 system_svc.go:56] duration metric: took 16.394034ms WaitForService to wait for kubelet.
	I0109 00:14:43.570874  452488 kubeadm.go:581] duration metric: took 4m18.766870325s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:14:43.570904  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:14:43.575087  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:14:43.575115  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:14:43.575127  452488 node_conditions.go:105] duration metric: took 4.218446ms to run NodePressure ...
	I0109 00:14:43.575139  452488 start.go:228] waiting for startup goroutines ...
	I0109 00:14:43.575145  452488 start.go:233] waiting for cluster config update ...
	I0109 00:14:43.575154  452488 start.go:242] writing updated cluster config ...
	I0109 00:14:43.575452  452488 ssh_runner.go:195] Run: rm -f paused
	I0109 00:14:43.636407  452488 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:14:43.638597  452488 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-834116" cluster and "default" namespace by default
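Earlier in this block the apiserver readiness gate is a plain HTTPS probe of /healthz on https://192.168.39.73:8444, accepted once it returns 200 with body "ok". A minimal, hypothetical Go sketch of such a polling probe (TLS verification is skipped only to keep the sketch self-contained; a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Endpoint taken from the log; 8444 is the default-k8s-diff-port profile's apiserver port.
	url := "https://192.168.39.73:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Healthy once the endpoint answers 200 with the literal body "ok".
			if resp.StatusCode == 200 && strings.TrimSpace(string(body)) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not report healthy")
}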
	I0109 00:14:40.814426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.310989  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.214186  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.714118  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.968087  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.968943  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.809788  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:47.810189  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:46.213897  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.714327  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.716636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.472384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.473405  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.310188  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.311048  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.803108  452237 pod_ready.go:81] duration metric: took 4m0.001087466s waiting for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:52.803148  452237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:14:52.803179  452237 pod_ready.go:38] duration metric: took 4m43.413410939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:52.803217  452237 kubeadm.go:640] restartCluster took 5m4.419560589s
	W0109 00:14:52.803342  452237 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:14:52.803433  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:14:53.213308  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.215229  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.972718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.470546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.714170  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:00.213742  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.968558  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:59.969971  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:01.970573  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:02.713539  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:05.213339  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:04.470909  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:06.976278  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:07.153986  452237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.350512063s)
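A reset like the one that just completed removes the static Pod manifests and the kubeconfig files under /etc/kubernetes, which is why the config check a few lines below finds none of them. A quick way to confirm the node was wiped (a sketch, assuming the no-preload-378213 guest is still reachable):

    # sketch: after the reset, the manifests directory and the *.conf files should be gone
    minikube ssh -p no-preload-378213 -- sudo ls -la /etc/kubernetes /etc/kubernetes/manifests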
	I0109 00:15:07.154091  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:07.169206  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:07.180120  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:07.190689  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:07.190746  452237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:15:07.249723  452237 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:15:07.249803  452237 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:07.413454  452237 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:07.413648  452237 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:07.413809  452237 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:07.666677  452237 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:07.668620  452237 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:07.668736  452237 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:07.668869  452237 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:07.669044  452237 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:07.669122  452237 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:07.669206  452237 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:07.669265  452237 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:07.669338  452237 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:07.669409  452237 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:07.669493  452237 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:07.669587  452237 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:07.669632  452237 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:07.669698  452237 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:07.892774  452237 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:08.387341  452237 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:15:08.697850  452237 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:09.110380  452237 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:09.182970  452237 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:09.183625  452237 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:09.186350  452237 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:09.188402  452237 out.go:204]   - Booting up control plane ...
	I0109 00:15:09.188494  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:09.188620  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:09.190877  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:09.210069  452237 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:09.213806  452237 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:09.214168  452237 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:15:09.348180  452237 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:07.713522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:10.212932  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:09.468413  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:11.472366  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:12.214158  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:14.713831  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:13.968332  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:15.970174  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:17.853084  452237 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502974 seconds
	I0109 00:15:17.871025  452237 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:17.897430  452237 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:18.444483  452237 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:18.444785  452237 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-378213 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:15:18.959611  452237 kubeadm.go:322] [bootstrap-token] Using token: dhjf8u.939ptni0q22ypfw8
	I0109 00:15:18.961445  452237 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:18.961621  452237 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:18.976769  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:15:18.986315  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:18.991512  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:18.996317  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:19.001219  452237 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:19.018739  452237 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:15:19.300703  452237 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:19.384320  452237 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:19.385524  452237 kubeadm.go:322] 
	I0109 00:15:19.385609  452237 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:19.385646  452237 kubeadm.go:322] 
	I0109 00:15:19.385746  452237 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:19.385759  452237 kubeadm.go:322] 
	I0109 00:15:19.385780  452237 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:19.385851  452237 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:19.385894  452237 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:19.385902  452237 kubeadm.go:322] 
	I0109 00:15:19.385976  452237 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:15:19.385984  452237 kubeadm.go:322] 
	I0109 00:15:19.386052  452237 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:15:19.386063  452237 kubeadm.go:322] 
	I0109 00:15:19.386140  452237 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:19.386255  452237 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:19.386338  452237 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:19.386348  452237 kubeadm.go:322] 
	I0109 00:15:19.386445  452237 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:15:19.386563  452237 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:19.386588  452237 kubeadm.go:322] 
	I0109 00:15:19.386704  452237 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.386865  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:19.386893  452237 kubeadm.go:322] 	--control-plane 
	I0109 00:15:19.386900  452237 kubeadm.go:322] 
	I0109 00:15:19.387013  452237 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:19.387023  452237 kubeadm.go:322] 
	I0109 00:15:19.387156  452237 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.387306  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:19.388274  452237 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
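The Service-Kubelet warning above is kubeadm's preflight noticing that the kubelet systemd unit is not enabled for boot; minikube manages the kubelet service itself, so the warning is non-fatal in this flow. Applying the suggested fix by hand would look like this (a sketch, exactly as the warning recommends):

    sudo systemctl enable kubelet.service
    sudo systemctl is-enabled kubelet   # should now report "enabled"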
	I0109 00:15:19.388386  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:15:19.388404  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:19.390641  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:19.392729  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:19.420375  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
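The 457-byte file copied above is the bridge CNI configuration referred to by the "Configuring bridge CNI" message; to inspect what was actually written to the guest (a sketch using this run's profile name):

    minikube ssh -p no-preload-378213 -- sudo cat /etc/cni/net.d/1-k8s.conflist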
	I0109 00:15:19.480953  452237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:19.481036  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.481070  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=no-preload-378213 minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.529444  452237 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:19.828947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:17.214395  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:19.714562  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:18.467657  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.469306  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.329278  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:20.829730  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.329756  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.829370  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.329549  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.829161  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.329937  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.829891  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.329077  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.829276  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.715433  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.214554  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:22.469602  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.968838  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:25.329025  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:25.829279  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.329947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.829794  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.329030  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.829080  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.329613  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.829372  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.329826  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.829063  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.712393  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:28.715010  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:30.329991  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:30.829320  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.329115  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.423331  452237 kubeadm.go:1088] duration metric: took 11.942366757s to wait for elevateKubeSystemPrivileges.
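The long run of repeated "kubectl get sa default" calls above is a readiness poll: minikube reruns the same command until the default service account exists, which only happens once the API server and the controller-manager's service-account controller are up. A rough standalone equivalent of that loop, reusing the exact command from the log (sketch):

    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done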
	I0109 00:15:31.423377  452237 kubeadm.go:406] StartCluster complete in 5m43.086225729s
	I0109 00:15:31.423405  452237 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.423510  452237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:15:31.425917  452237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.426178  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:15:31.426284  452237 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:15:31.426369  452237 addons.go:69] Setting storage-provisioner=true in profile "no-preload-378213"
	I0109 00:15:31.426384  452237 addons.go:69] Setting default-storageclass=true in profile "no-preload-378213"
	I0109 00:15:31.426397  452237 addons.go:237] Setting addon storage-provisioner=true in "no-preload-378213"
	W0109 00:15:31.426409  452237 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:15:31.426432  452237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-378213"
	I0109 00:15:31.426447  452237 addons.go:69] Setting metrics-server=true in profile "no-preload-378213"
	I0109 00:15:31.426476  452237 addons.go:237] Setting addon metrics-server=true in "no-preload-378213"
	W0109 00:15:31.426484  452237 addons.go:246] addon metrics-server should already be in state true
	I0109 00:15:31.426485  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426540  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426434  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:15:31.426891  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426918  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426927  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426931  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.446291  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0109 00:15:31.446423  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0109 00:15:31.446819  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0109 00:15:31.447018  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.447639  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.447724  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447854  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.448095  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448259  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448288  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448354  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.448439  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448465  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448921  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448997  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.449699  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449744  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.449757  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449785  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.452784  452237 addons.go:237] Setting addon default-storageclass=true in "no-preload-378213"
	W0109 00:15:31.452809  452237 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:15:31.452841  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.454376  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.454416  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.467638  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0109 00:15:31.468325  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.468901  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.468921  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.469339  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.469563  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.471409  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.473329  452237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:15:31.474680  452237 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.474693  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:15:31.474706  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.473604  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0109 00:15:31.474062  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0109 00:15:31.475095  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475399  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.475627  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.475979  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.476163  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.477959  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.479656  452237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:15:31.478629  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.479280  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.479557  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.480974  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.481058  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:15:31.481066  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:15:31.481079  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.481110  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.481128  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.481308  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.481878  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.482384  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.483085  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.483645  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.483668  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.484708  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485095  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.485117  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485318  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.487608  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.487807  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.487999  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.499347  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0109 00:15:31.499913  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.500547  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.500570  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.500917  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.501145  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.503016  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.503296  452237 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.503310  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:15:31.503325  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.506091  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506397  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.506455  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506652  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.506831  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.506978  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.507091  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.624782  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:15:31.642826  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.663296  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.710300  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:15:31.710330  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:15:31.787478  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:15:31.787517  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:15:31.871349  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:31.871407  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:15:31.968192  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:32.072474  452237 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-378213" context rescaled to 1 replicas
	I0109 00:15:32.072532  452237 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:15:32.074625  452237 out.go:177] * Verifying Kubernetes components...
	I0109 00:15:27.468923  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:29.971742  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:32.075944  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:32.439632  452237 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
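The sed pipeline a few lines earlier rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host address (192.168.61.1 in this run). To see the injected hosts block afterwards (sketch):

    kubectl --context no-preload-378213 -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'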
	I0109 00:15:32.439722  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.439751  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440089  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440193  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.440209  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.440219  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440166  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440559  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440571  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440580  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.497313  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.497346  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.497717  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.497747  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901192  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.237846158s)
	I0109 00:15:32.901262  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901276  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901654  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.901703  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901719  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901730  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901662  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902029  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902069  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.902079  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030220  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.061947007s)
	I0109 00:15:33.030237  452237 node_ready.go:35] waiting up to 6m0s for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.030290  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030308  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.030694  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.030714  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030725  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030734  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.031003  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.031022  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.031034  452237 addons.go:473] Verifying addon metrics-server=true in "no-preload-378213"
	I0109 00:15:33.032849  452237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:15:33.034106  452237 addons.go:508] enable addons completed in 1.60782305s: enabled=[default-storageclass storage-provisioner metrics-server]
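With the addons reported as enabled, the metrics-server objects should now exist in kube-system; a manual spot check (a sketch, assuming the addon's usual v1beta1.metrics.k8s.io APIService name and k8s-app=metrics-server label):

    kubectl --context no-preload-378213 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-378213 -n kube-system get deploy,pods -l k8s-app=metrics-server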
	I0109 00:15:33.044548  452237 node_ready.go:49] node "no-preload-378213" has status "Ready":"True"
	I0109 00:15:33.044577  452237 node_ready.go:38] duration metric: took 14.31045ms waiting for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.044592  452237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.060577  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:34.066536  452237 pod_ready.go:97] error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066570  452237 pod_ready.go:81] duration metric: took 1.005962139s waiting for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:34.066584  452237 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066594  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:31.213050  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:33.206836  451943 pod_ready.go:81] duration metric: took 4m0.000952779s waiting for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:33.206864  451943 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:15:33.206884  451943 pod_ready.go:38] duration metric: took 4m1.199765303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.206916  451943 kubeadm.go:640] restartCluster took 5m9.054273444s
	W0109 00:15:33.206995  451943 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:15:33.207029  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:15:32.469904  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:34.969702  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:36.074768  452237 pod_ready.go:92] pod "coredns-76f75df574-ztvgr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.074793  452237 pod_ready.go:81] duration metric: took 2.008191718s waiting for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.074803  452237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080586  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.080610  452237 pod_ready.go:81] duration metric: took 5.80009ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080623  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.085972  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.085995  452237 pod_ready.go:81] duration metric: took 5.365045ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.086004  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091275  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.091295  452237 pod_ready.go:81] duration metric: took 5.284302ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091306  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095919  452237 pod_ready.go:92] pod "kube-proxy-4vnf5" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.095938  452237 pod_ready.go:81] duration metric: took 4.624685ms waiting for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095949  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471021  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.471051  452237 pod_ready.go:81] duration metric: took 375.093915ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471066  452237 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:38.478891  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
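Each pod_ready line like the one above is a poll of the pod's Ready condition; the manual equivalent of a single probe (sketch) would be:

    kubectl --context no-preload-378213 -n kube-system get pod metrics-server-57f55c9bc5-k426v \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'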
	I0109 00:15:39.932714  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.725641704s)
	I0109 00:15:39.932824  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:39.949655  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:39.967317  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:39.983553  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:39.983602  451943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0109 00:15:40.196509  451943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:37.468440  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.468561  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:41.468728  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:40.481038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:42.979928  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:43.468928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.968791  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.479525  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.981785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:49.988192  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.970158  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:50.469209  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.798385  451943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0109 00:15:53.798458  451943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:53.798557  451943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:53.798719  451943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:53.798863  451943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:53.799001  451943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:53.799122  451943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:53.799199  451943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0109 00:15:53.799296  451943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:53.800918  451943 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:53.801030  451943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:53.801108  451943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:53.801199  451943 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:53.801284  451943 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:53.801342  451943 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:53.801386  451943 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:53.801441  451943 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:53.801491  451943 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:53.801563  451943 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:53.801654  451943 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:53.801710  451943 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:53.801776  451943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:53.801841  451943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:53.801885  451943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:53.801935  451943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:53.802013  451943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:53.802097  451943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:53.803572  451943 out.go:204]   - Booting up control plane ...
	I0109 00:15:53.803682  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:53.803757  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:53.803811  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:53.803932  451943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:53.804150  451943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:53.804251  451943 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506007 seconds
	I0109 00:15:53.804388  451943 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:53.804541  451943 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:53.804628  451943 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:53.804832  451943 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-003293 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0109 00:15:53.804900  451943 kubeadm.go:322] [bootstrap-token] Using token: 4iop3a.ft6ghwlgcg45v9u4
	I0109 00:15:53.806501  451943 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:53.806592  451943 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:53.806724  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:53.806832  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:53.806959  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:53.807033  451943 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:53.807071  451943 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:53.807109  451943 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:53.807115  451943 kubeadm.go:322] 
	I0109 00:15:53.807175  451943 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:53.807199  451943 kubeadm.go:322] 
	I0109 00:15:53.807319  451943 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:53.807328  451943 kubeadm.go:322] 
	I0109 00:15:53.807353  451943 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:53.807457  451943 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:53.807531  451943 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:53.807541  451943 kubeadm.go:322] 
	I0109 00:15:53.807594  451943 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:53.807668  451943 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:53.807746  451943 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:53.807766  451943 kubeadm.go:322] 
	I0109 00:15:53.807884  451943 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0109 00:15:53.807989  451943 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:53.807998  451943 kubeadm.go:322] 
	I0109 00:15:53.808083  451943 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808215  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:53.808267  451943 kubeadm.go:322]     --control-plane 	  
	I0109 00:15:53.808282  451943 kubeadm.go:322] 
	I0109 00:15:53.808416  451943 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:53.808431  451943 kubeadm.go:322] 
	I0109 00:15:53.808535  451943 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808635  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:53.808646  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:15:53.808655  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:53.810445  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:52.478401  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.478468  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.812384  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:53.822034  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:53.841918  451943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:53.842007  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.842023  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=old-k8s-version-003293 minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.878580  451943 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:54.119184  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:54.619596  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.119468  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.619508  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:52.969233  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.969384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.969570  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.978217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:59.478428  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.119299  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:56.620179  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.119526  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.619985  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.119330  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.619572  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.120142  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.619498  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.119329  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.620206  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.468767  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.969313  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.978314  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:03.979583  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.120279  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:01.619668  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.620169  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.120249  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.619563  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.619912  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.120243  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.620114  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.971649  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.468683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:05.980829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:08.479315  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.119938  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:06.619543  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.119220  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.619392  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.119991  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.619517  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.120205  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.620121  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.119909  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.273872  451943 kubeadm.go:1088] duration metric: took 16.431936842s to wait for elevateKubeSystemPrivileges.
	I0109 00:16:10.273910  451943 kubeadm.go:406] StartCluster complete in 5m46.185018744s
	I0109 00:16:10.273961  451943 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.274054  451943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:16:10.275851  451943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.276124  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:16:10.276261  451943 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:16:10.276362  451943 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276373  451943 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276388  451943 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-003293"
	I0109 00:16:10.276394  451943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-003293"
	I0109 00:16:10.276390  451943 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276415  451943 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-003293"
	W0109 00:16:10.276428  451943 addons.go:246] addon metrics-server should already be in state true
	I0109 00:16:10.276454  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:16:10.276481  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	W0109 00:16:10.276397  451943 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:16:10.276544  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.276864  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276880  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276867  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276941  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.276955  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.277062  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.294099  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0109 00:16:10.294268  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0109 00:16:10.294410  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0109 00:16:10.294718  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294768  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294925  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.295279  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295305  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295388  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295419  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295397  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295480  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295693  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295769  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295788  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.296012  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.296310  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.296357  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.297119  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.297171  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.299887  451943 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-003293"
	W0109 00:16:10.299910  451943 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:16:10.299946  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.300224  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.300263  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.313007  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0109 00:16:10.313533  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.314010  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.314026  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.314437  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.314622  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.315598  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0109 00:16:10.316247  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.316532  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.318734  451943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:16:10.317343  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.317379  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0109 00:16:10.320285  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:16:10.320308  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:16:10.320329  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.320333  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.320705  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.320963  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.321103  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.321233  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.321247  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.321761  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.322210  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.322242  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.323835  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.324029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.324152  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.324177  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.326057  451943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:16:10.324406  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.328066  451943 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.328087  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:16:10.328096  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.328124  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.328784  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.329014  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.331395  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.331785  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.331810  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.332001  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.332191  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.332335  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.332480  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.347123  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0109 00:16:10.347716  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.348691  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.348719  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.349127  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.349342  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.350834  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.351133  451943 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.351149  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:16:10.351168  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.354189  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354621  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.354668  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354909  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.355119  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.355294  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.355481  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.515777  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.534034  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:16:10.534064  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:16:10.554850  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.584934  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:16:10.584964  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:16:10.615671  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:16:10.637303  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.637339  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:16:10.680679  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.830403  451943 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-003293" context rescaled to 1 replicas
	I0109 00:16:10.830449  451943 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:16:10.832633  451943 out.go:177] * Verifying Kubernetes components...
	I0109 00:16:10.834172  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:16:11.515705  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.515738  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516087  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.516123  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516132  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.516141  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.516151  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516389  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516407  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.571488  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.571524  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.571880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.571890  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.571911  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630216  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075317719s)
	I0109 00:16:11.630282  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630297  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.630308  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.014587881s)
	I0109 00:16:11.630345  451943 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0109 00:16:11.630710  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.630729  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630740  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.630751  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.631004  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.631032  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.631153  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.716276  451943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.716463  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0357366s)
	I0109 00:16:11.716513  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716534  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.716848  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.716869  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.716878  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716889  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.717212  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.717222  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.717228  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.717245  451943 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-003293"
	I0109 00:16:11.719193  451943 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:16:08.968622  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.470234  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:10.479812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:12.984384  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.720570  451943 addons.go:508] enable addons completed in 1.44432074s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:16:11.733736  451943 node_ready.go:49] node "old-k8s-version-003293" has status "Ready":"True"
	I0109 00:16:11.733767  451943 node_ready.go:38] duration metric: took 17.451191ms waiting for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.733787  451943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:11.750301  451943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:13.762510  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:13.969774  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.468912  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:15.481249  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:17.978744  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:19.979938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.257523  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.259142  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.757454  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.469229  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.469761  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:22.478368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.978345  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:21.256765  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.256797  451943 pod_ready.go:81] duration metric: took 9.506455286s waiting for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.256807  451943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262633  451943 pod_ready.go:92] pod "kube-proxy-h8br2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.262651  451943 pod_ready.go:81] duration metric: took 5.836717ms waiting for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262660  451943 pod_ready.go:38] duration metric: took 9.52886361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:21.262697  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:16:21.262758  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:16:21.280249  451943 api_server.go:72] duration metric: took 10.449767566s to wait for apiserver process to appear ...
	I0109 00:16:21.280282  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:16:21.280305  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:16:21.286759  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:16:21.287885  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:16:21.287913  451943 api_server.go:131] duration metric: took 7.622726ms to wait for apiserver health ...
	I0109 00:16:21.287924  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:16:21.292745  451943 system_pods.go:59] 4 kube-system pods found
	I0109 00:16:21.292774  451943 system_pods.go:61] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.292782  451943 system_pods.go:61] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.292792  451943 system_pods.go:61] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.292799  451943 system_pods.go:61] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.292809  451943 system_pods.go:74] duration metric: took 4.87707ms to wait for pod list to return data ...
	I0109 00:16:21.292817  451943 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:16:21.295463  451943 default_sa.go:45] found service account: "default"
	I0109 00:16:21.295486  451943 default_sa.go:55] duration metric: took 2.661749ms for default service account to be created ...
	I0109 00:16:21.295495  451943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:16:21.299334  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.299369  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.299379  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.299389  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.299401  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.299419  451943 retry.go:31] will retry after 262.555966ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.567416  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.567444  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.567449  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.567456  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.567461  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.567483  451943 retry.go:31] will retry after 296.862413ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.869873  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.869910  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.869919  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.869932  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.869939  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.869960  451943 retry.go:31] will retry after 354.537219ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.229945  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.229973  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.229978  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.229985  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.229990  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.230008  451943 retry.go:31] will retry after 403.317754ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.639068  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.639100  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.639106  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.639115  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.639122  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.639145  451943 retry.go:31] will retry after 548.96975ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:23.193832  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:23.193865  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:23.193874  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:23.193884  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:23.193891  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:23.193912  451943 retry.go:31] will retry after 808.39734ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:24.007761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:24.007789  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:24.007794  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:24.007800  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:24.007805  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:24.007826  451943 retry.go:31] will retry after 1.084893616s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:25.097415  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:25.097446  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:25.097452  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:25.097461  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:25.097468  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:25.097488  451943 retry.go:31] will retry after 1.364718688s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.471347  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.968309  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.968540  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.981321  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:28.981763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.469277  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:26.469302  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:26.469308  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:26.469314  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:26.469319  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:26.469336  451943 retry.go:31] will retry after 1.608197445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.083522  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:28.083549  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:28.083554  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:28.083561  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:28.083566  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:28.083584  451943 retry.go:31] will retry after 1.803084046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:29.892783  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:29.892825  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:29.892834  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:29.892845  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:29.892852  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:29.892878  451943 retry.go:31] will retry after 2.500544298s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.970772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:30.972069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:31.478822  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:33.481537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:32.406761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:32.406791  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:32.406796  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:32.406803  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:32.406808  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:32.406826  451943 retry.go:31] will retry after 3.245901502s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:35.657591  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:35.657630  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:35.657636  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:35.657644  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:35.657650  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:35.657669  451943 retry.go:31] will retry after 2.987638992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:33.468927  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.968669  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.979914  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:37.982358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:38.652562  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:38.652589  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:38.652594  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:38.652600  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:38.652605  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:38.652621  451943 retry.go:31] will retry after 5.12035072s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:38.469167  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.469783  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.481402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:42.980559  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:43.778329  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:43.778358  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:43.778363  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:43.778370  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:43.778375  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:43.778392  451943 retry.go:31] will retry after 5.3812896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:42.972242  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.468157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.479217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:47.978368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.978994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.165092  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:49.165124  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:49.165129  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:49.165136  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:49.165142  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:49.165161  451943 retry.go:31] will retry after 8.788078847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:47.469586  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.968667  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.969102  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.979785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:53.984069  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:54.467285  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.469141  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.478629  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:58.479207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:57.958448  451943 system_pods.go:86] 5 kube-system pods found
	I0109 00:16:57.958475  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:57.958481  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Pending
	I0109 00:16:57.958485  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:57.958492  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:57.958497  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:57.958515  451943 retry.go:31] will retry after 8.563711001s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:58.470664  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.970608  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.481608  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:02.978829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:03.468919  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.469064  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.482545  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:07.979446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:06.528938  451943 system_pods.go:86] 6 kube-system pods found
	I0109 00:17:06.528963  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:06.528969  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:06.528973  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:06.528977  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:06.528987  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:06.528994  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:06.529016  451943 retry.go:31] will retry after 11.544909303s: missing components: etcd, kube-apiserver
	I0109 00:17:07.969131  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:09.969180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:10.479061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.480724  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.978853  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.468823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.469027  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:16.968659  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.081528  451943 system_pods.go:86] 8 kube-system pods found
	I0109 00:17:18.081568  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:18.081576  451943 system_pods.go:89] "etcd-old-k8s-version-003293" [f4516e0b-a960-4dc1-85c3-ae8197ded761] Running
	I0109 00:17:18.081583  451943 system_pods.go:89] "kube-apiserver-old-k8s-version-003293" [c5e83fe4-e95d-47ec-86a4-0615095ef746] Running
	I0109 00:17:18.081590  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:18.081596  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:18.081603  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:18.081613  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:18.081622  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:18.081636  451943 system_pods.go:126] duration metric: took 56.786133323s to wait for k8s-apps to be running ...
	I0109 00:17:18.081651  451943 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:17:18.081726  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:17:18.103798  451943 system_svc.go:56] duration metric: took 22.127635ms WaitForService to wait for kubelet.
	I0109 00:17:18.103844  451943 kubeadm.go:581] duration metric: took 1m7.273361806s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:17:18.103879  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:17:18.107740  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:17:18.107768  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:17:18.107803  451943 node_conditions.go:105] duration metric: took 3.918349ms to run NodePressure ...
	I0109 00:17:18.107814  451943 start.go:228] waiting for startup goroutines ...
	I0109 00:17:18.107826  451943 start.go:233] waiting for cluster config update ...
	I0109 00:17:18.107838  451943 start.go:242] writing updated cluster config ...
	I0109 00:17:18.108179  451943 ssh_runner.go:195] Run: rm -f paused
	I0109 00:17:18.161701  451943 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0109 00:17:18.163722  451943 out.go:177] 
	W0109 00:17:18.165269  451943 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0109 00:17:18.166781  451943 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0109 00:17:18.168422  451943 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-003293" cluster and "default" namespace by default
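	(Note: the warning above flags a 13-minor-version skew between the host kubectl (1.29.0) and the 1.16.0 cluster. A minimal way to sidestep the skew, following the hint printed in the log and using the profile name from this run, is to drive the cluster with the kubectl that minikube bundles:)
	    # Use the version-matched kubectl shipped with minikube (profile name taken from the log above)
	    minikube kubectl -p old-k8s-version-003293 -- get pods -A
	    # For comparison, the host kubectl reports both client and server versions
	    kubectl --context old-k8s-version-003293 version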
	I0109 00:17:16.980679  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:19.480507  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.969475  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.471739  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.978721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:24.478734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:23.968125  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:25.968375  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:26.483938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:28.979405  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:27.969238  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:29.969349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.973290  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.479085  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:33.978966  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:34.469294  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.967991  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.478328  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.481642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.970055  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:41.468509  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:40.978336  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:42.979499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:44.980394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:43.471069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:45.969083  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:47.479177  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:49.483109  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:48.469215  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:50.970448  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:51.979138  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:54.479275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:53.469152  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:55.470554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:56.480333  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:58.980818  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:57.968358  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:59.968498  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:01.485721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:03.980131  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:02.468272  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:04.469640  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:06.970010  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:05.981218  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:08.478827  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:09.469651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:11.970360  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:10.979972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:12.980174  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:14.470845  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:16.969297  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:15.479585  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:17.979035  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.979874  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.471447  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:21.473866  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:22.479239  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:24.979662  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:23.969077  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:26.469232  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:27.480054  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:29.978803  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:28.470397  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:30.968399  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:31.979175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:33.982180  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:32.467688  451984 pod_ready.go:81] duration metric: took 4m0.007315063s waiting for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	E0109 00:18:32.467715  451984 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:18:32.467724  451984 pod_ready.go:38] duration metric: took 4m2.010477321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:18:32.467740  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:18:32.467770  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:32.467841  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:32.540539  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.540568  451984 cri.go:89] found id: ""
	I0109 00:18:32.540578  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:32.540633  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.547617  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:32.547712  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:32.593446  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:32.593548  451984 cri.go:89] found id: ""
	I0109 00:18:32.593566  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:32.593622  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.598538  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:32.598630  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:32.641182  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.641217  451984 cri.go:89] found id: ""
	I0109 00:18:32.641227  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:32.641281  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.645529  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:32.645610  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:32.687187  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:32.687222  451984 cri.go:89] found id: ""
	I0109 00:18:32.687233  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:32.687299  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.691477  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:32.691551  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:32.730800  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:32.730834  451984 cri.go:89] found id: ""
	I0109 00:18:32.730853  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:32.730914  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.735372  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:32.735458  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:32.779326  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:32.779355  451984 cri.go:89] found id: ""
	I0109 00:18:32.779384  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:32.779528  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.784366  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:32.784444  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:32.825533  451984 cri.go:89] found id: ""
	I0109 00:18:32.825566  451984 logs.go:284] 0 containers: []
	W0109 00:18:32.825577  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:32.825586  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:32.825657  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:32.871429  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:32.871465  451984 cri.go:89] found id: ""
	I0109 00:18:32.871478  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:32.871546  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.876454  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:32.876483  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.931470  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:32.931518  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.976305  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:32.976344  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:33.421205  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:33.421256  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:33.436706  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:33.436752  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:33.605332  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:33.605369  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:33.653704  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:33.653746  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:33.697440  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:33.697489  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:33.753681  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:33.753728  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:33.798230  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:33.798271  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:33.862054  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:33.862089  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:33.942360  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:33.942549  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:33.965458  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:33.965503  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:34.012430  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012465  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:34.012554  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:34.012575  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:34.012583  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:34.012590  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012596  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
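	(Note: the log-gathering sequence above - crictl ps to resolve a component's container ID, then crictl logs --tail 400 on that ID, plus journalctl for kubelet and CRI-O - can be reproduced by hand on the node. A rough sketch, assuming the embed-certs-845373 profile from this run and that crictl is on the node's PATH:)
	    # Open a shell on the minikube node for this profile
	    minikube ssh -p embed-certs-845373
	    # Resolve the container ID for a component, then tail its logs (mirrors the Run: lines above)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl logs --tail 400 <container-id-from-previous-command>
	    # Kubelet and CRI-O logs come from journald, as in the report
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400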
	I0109 00:18:36.480501  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:38.979625  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:41.480903  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:43.978879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:44.014441  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:18:44.031831  451984 api_server.go:72] duration metric: took 4m15.676282348s to wait for apiserver process to appear ...
	I0109 00:18:44.031865  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:18:44.031906  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:44.031966  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:44.077138  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.077163  451984 cri.go:89] found id: ""
	I0109 00:18:44.077172  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:44.077232  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.081831  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:44.081906  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:44.121451  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.121474  451984 cri.go:89] found id: ""
	I0109 00:18:44.121482  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:44.121535  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.126070  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:44.126158  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:44.170657  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:44.170690  451984 cri.go:89] found id: ""
	I0109 00:18:44.170699  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:44.170753  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.175896  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:44.175977  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:44.220851  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.220877  451984 cri.go:89] found id: ""
	I0109 00:18:44.220886  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:44.220937  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.225006  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:44.225094  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:44.270073  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.270107  451984 cri.go:89] found id: ""
	I0109 00:18:44.270118  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:44.270188  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.275153  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:44.275245  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:44.318077  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:44.318111  451984 cri.go:89] found id: ""
	I0109 00:18:44.318122  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:44.318201  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.322475  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:44.322560  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:44.361736  451984 cri.go:89] found id: ""
	I0109 00:18:44.361773  451984 logs.go:284] 0 containers: []
	W0109 00:18:44.361784  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:44.361792  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:44.361864  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:44.404699  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:44.404726  451984 cri.go:89] found id: ""
	I0109 00:18:44.404737  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:44.404803  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.408753  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:44.408777  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.455119  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:44.455162  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.497680  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:44.497721  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:44.548809  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:44.548841  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:44.628959  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:44.629159  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:44.651315  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:44.651388  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:44.666013  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:44.666055  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.716269  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:44.716317  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.762681  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:44.762720  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:45.136682  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:45.136743  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:45.274971  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:45.275023  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:45.323164  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:45.323208  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:45.383823  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:45.383881  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:45.428483  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428516  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:45.428571  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:45.428579  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:45.428588  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:45.428601  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428608  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:45.980484  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:48.483446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:50.980210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:53.480495  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:55.429277  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:18:55.436812  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:18:55.438287  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:18:55.438316  451984 api_server.go:131] duration metric: took 11.40644287s to wait for apiserver health ...
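	(Note: the healthz probe above can also be issued directly against the API server. A sketch, assuming minikube's usual certificate layout under ~/.minikube, which is not shown in this report; the IP and port come from the log above:)
	    curl --cacert ~/.minikube/ca.crt \
	         --cert   ~/.minikube/profiles/embed-certs-845373/client.crt \
	         --key    ~/.minikube/profiles/embed-certs-845373/client.key \
	         https://192.168.50.132:8443/healthz
	    # Expected output on a healthy control plane: ok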
	I0109 00:18:55.438327  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:18:55.438359  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:55.438433  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:55.485627  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:55.485654  451984 cri.go:89] found id: ""
	I0109 00:18:55.485664  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:55.485732  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.490219  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:55.490296  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:55.531890  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:55.531920  451984 cri.go:89] found id: ""
	I0109 00:18:55.531930  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:55.532002  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.536651  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:55.536724  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:55.579859  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:55.579909  451984 cri.go:89] found id: ""
	I0109 00:18:55.579921  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:55.579981  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.584894  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:55.584970  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:55.626833  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:55.626861  451984 cri.go:89] found id: ""
	I0109 00:18:55.626871  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:55.626940  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.631334  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:55.631449  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:55.675805  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:55.675831  451984 cri.go:89] found id: ""
	I0109 00:18:55.675843  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:55.675907  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.680727  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:55.680805  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:55.734757  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:55.734788  451984 cri.go:89] found id: ""
	I0109 00:18:55.734799  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:55.734867  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.739390  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:55.739464  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:55.785683  451984 cri.go:89] found id: ""
	I0109 00:18:55.785720  451984 logs.go:284] 0 containers: []
	W0109 00:18:55.785733  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:55.785741  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:55.785815  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:55.839983  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:55.840010  451984 cri.go:89] found id: ""
	I0109 00:18:55.840018  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:55.840066  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.844870  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:55.844897  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:55.979554  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:55.979600  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:56.023796  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:56.023840  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:56.070463  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:56.070512  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:56.116109  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:56.116142  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:56.505693  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:56.505742  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:56.566638  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:56.566683  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:56.649199  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.649372  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.670766  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:56.670809  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:56.719532  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:56.719574  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:56.763714  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:56.763758  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:56.825271  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:56.825324  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:56.869669  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:56.869717  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:56.890240  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890274  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:56.890355  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:56.890385  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.890395  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.890406  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890415  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:55.481178  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:57.979207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:59.980319  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:02.478816  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:04.478919  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:06.899277  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:19:06.899321  451984 system_pods.go:61] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.899329  451984 system_pods.go:61] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.899334  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.899338  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.899348  451984 system_pods.go:61] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.899352  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.899383  451984 system_pods.go:61] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.899395  451984 system_pods.go:61] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.899414  451984 system_pods.go:74] duration metric: took 11.461075857s to wait for pod list to return data ...
	I0109 00:19:06.899429  451984 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:19:06.903404  451984 default_sa.go:45] found service account: "default"
	I0109 00:19:06.903436  451984 default_sa.go:55] duration metric: took 3.995992ms for default service account to be created ...
	I0109 00:19:06.903448  451984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:19:06.910497  451984 system_pods.go:86] 8 kube-system pods found
	I0109 00:19:06.910523  451984 system_pods.go:89] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.910528  451984 system_pods.go:89] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.910533  451984 system_pods.go:89] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.910537  451984 system_pods.go:89] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.910541  451984 system_pods.go:89] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.910545  451984 system_pods.go:89] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.910553  451984 system_pods.go:89] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.910558  451984 system_pods.go:89] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.910564  451984 system_pods.go:126] duration metric: took 7.110675ms to wait for k8s-apps to be running ...
	I0109 00:19:06.910571  451984 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:19:06.910616  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:19:06.927621  451984 system_svc.go:56] duration metric: took 17.036468ms WaitForService to wait for kubelet.
	I0109 00:19:06.927654  451984 kubeadm.go:581] duration metric: took 4m38.572113328s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:19:06.927677  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:19:06.931040  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:19:06.931071  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:19:06.931083  451984 node_conditions.go:105] duration metric: took 3.401351ms to run NodePressure ...
	I0109 00:19:06.931095  451984 start.go:228] waiting for startup goroutines ...
	I0109 00:19:06.931101  451984 start.go:233] waiting for cluster config update ...
	I0109 00:19:06.931113  451984 start.go:242] writing updated cluster config ...
	I0109 00:19:06.931454  451984 ssh_runner.go:195] Run: rm -f paused
	I0109 00:19:06.989366  451984 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:19:06.991673  451984 out.go:177] * Done! kubectl is now configured to use "embed-certs-845373" cluster and "default" namespace by default
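	(Note: both clusters in this section spend the full wait polling a metrics-server pod that stays Pending / ContainersNotReady. A quick way to inspect why, sketched with the embed-certs-845373 context configured above; the pod name is taken from the log, and the commands assume the pod still exists when run:)
	    # Describe the stuck pod and look at recent namespace events
	    kubectl --context embed-certs-845373 -n kube-system describe pod metrics-server-57f55c9bc5-zg66s
	    kubectl --context embed-certs-845373 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
	    # Container logs, if the metrics-server container has started at least once
	    kubectl --context embed-certs-845373 -n kube-system logs metrics-server-57f55c9bc5-zg66s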
	I0109 00:19:06.479508  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:08.978313  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:11.482400  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:13.979056  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:16.480908  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:18.481024  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:20.482252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:22.978703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:24.979574  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:26.979620  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:29.478426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:31.478540  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:33.478901  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:35.978875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:36.471149  452237 pod_ready.go:81] duration metric: took 4m0.000060952s waiting for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	E0109 00:19:36.471203  452237 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:19:36.471221  452237 pod_ready.go:38] duration metric: took 4m3.426617855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:19:36.471243  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:19:36.471314  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:36.471400  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:36.539330  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:36.539370  452237 cri.go:89] found id: ""
	I0109 00:19:36.539383  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:36.539446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.544259  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:36.544339  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:36.591395  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:36.591437  452237 cri.go:89] found id: ""
	I0109 00:19:36.591448  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:36.591520  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.596454  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:36.596523  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:36.641041  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:36.641070  452237 cri.go:89] found id: ""
	I0109 00:19:36.641082  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:36.641145  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.645716  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:36.645798  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:36.686577  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:36.686607  452237 cri.go:89] found id: ""
	I0109 00:19:36.686618  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:36.686686  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.690744  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:36.690824  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:36.733504  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:36.733534  452237 cri.go:89] found id: ""
	I0109 00:19:36.733544  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:36.733613  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.738581  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:36.738663  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:36.783280  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:36.783314  452237 cri.go:89] found id: ""
	I0109 00:19:36.783326  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:36.783419  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.788101  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:36.788171  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:36.839094  452237 cri.go:89] found id: ""
	I0109 00:19:36.839124  452237 logs.go:284] 0 containers: []
	W0109 00:19:36.839133  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:36.839139  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:36.839201  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:36.880203  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:36.880236  452237 cri.go:89] found id: ""
	I0109 00:19:36.880247  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:36.880329  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.884703  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:36.884732  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:36.900132  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:36.900175  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:37.044558  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:37.044596  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:37.090555  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:37.090601  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:37.550107  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:37.550164  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:37.608267  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:37.608316  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:37.689186  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:37.689447  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:37.712896  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:37.712958  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:37.766035  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:37.766078  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:37.814072  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:37.814111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:37.858686  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:37.858725  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:37.912616  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:37.912661  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:37.973080  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:37.973129  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:38.016941  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.016989  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:38.017072  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:38.017088  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:38.017101  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:38.017118  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.017128  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:48.018753  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:19:48.040302  452237 api_server.go:72] duration metric: took 4m15.967717255s to wait for apiserver process to appear ...
	I0109 00:19:48.040335  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:19:48.040382  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:48.040539  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:48.105058  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.105084  452237 cri.go:89] found id: ""
	I0109 00:19:48.105095  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:48.105158  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.110067  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:48.110165  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:48.153350  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.153383  452237 cri.go:89] found id: ""
	I0109 00:19:48.153394  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:48.153464  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.158284  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:48.158355  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:48.205447  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.205480  452237 cri.go:89] found id: ""
	I0109 00:19:48.205492  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:48.205572  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.210254  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:48.210353  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:48.253594  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:48.253624  452237 cri.go:89] found id: ""
	I0109 00:19:48.253633  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:48.253700  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.259160  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:48.259229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:48.302358  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.302383  452237 cri.go:89] found id: ""
	I0109 00:19:48.302393  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:48.302446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.308134  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:48.308229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:48.349632  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:48.349656  452237 cri.go:89] found id: ""
	I0109 00:19:48.349664  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:48.349715  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.354626  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:48.354693  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:48.400501  452237 cri.go:89] found id: ""
	I0109 00:19:48.400535  452237 logs.go:284] 0 containers: []
	W0109 00:19:48.400547  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:48.400555  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:48.400626  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:48.444607  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:48.444631  452237 cri.go:89] found id: ""
	I0109 00:19:48.444641  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:48.444710  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.448965  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:48.449000  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:48.496050  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:48.496085  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:48.620778  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:48.620812  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.688155  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:48.688204  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.745755  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:48.745792  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.786141  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:48.786195  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.833422  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:48.833456  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:49.231467  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:49.231508  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:49.315139  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.315313  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.337901  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:49.337942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:49.353452  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:49.353494  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:49.409069  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:49.409111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:49.466267  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:49.466311  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:49.512720  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512762  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:49.512838  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:49.512858  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.512868  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.512882  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512891  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:59.513828  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:19:59.518896  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:19:59.520439  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:19:59.520463  452237 api_server.go:131] duration metric: took 11.480122148s to wait for apiserver health ...
	I0109 00:19:59.520479  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:19:59.520504  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:59.520549  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:59.566636  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:59.566669  452237 cri.go:89] found id: ""
	I0109 00:19:59.566680  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:59.566773  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.570754  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:59.570817  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:59.612286  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:59.612314  452237 cri.go:89] found id: ""
	I0109 00:19:59.612326  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:59.612399  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.618705  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:59.618778  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:59.666381  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:59.666408  452237 cri.go:89] found id: ""
	I0109 00:19:59.666417  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:59.666468  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.672155  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:59.672242  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:59.712973  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:59.712997  452237 cri.go:89] found id: ""
	I0109 00:19:59.713005  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:59.713068  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.717181  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:59.717261  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:59.762121  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:59.762153  452237 cri.go:89] found id: ""
	I0109 00:19:59.762163  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:59.762236  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.766573  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:59.766630  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:59.812202  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:59.812233  452237 cri.go:89] found id: ""
	I0109 00:19:59.812246  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:59.812309  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.817529  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:59.817615  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:59.865373  452237 cri.go:89] found id: ""
	I0109 00:19:59.865402  452237 logs.go:284] 0 containers: []
	W0109 00:19:59.865410  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:59.865417  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:59.865486  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:59.914250  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:59.914273  452237 cri.go:89] found id: ""
	I0109 00:19:59.914283  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:59.914369  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.918360  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:59.918391  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:59.999676  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:59.999875  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.022457  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:20:00.022496  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:20:00.082902  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:20:00.082942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:20:00.127886  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:20:00.127933  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:20:00.168705  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:20:00.168737  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:20:00.554704  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:20:00.554751  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:20:00.604427  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:20:00.604462  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:20:00.618923  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:20:00.618954  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:20:00.747443  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:20:00.747475  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:20:00.802652  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:20:00.802691  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:20:00.849279  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:20:00.849318  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:20:00.887879  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:20:00.887919  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:20:00.951894  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.951928  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:20:00.951999  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:20:00.952011  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:20:00.952019  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.952030  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.952035  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:20:10.962675  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:20:10.962706  452237 system_pods.go:61] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.962711  452237 system_pods.go:61] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.962716  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.962721  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.962725  452237 system_pods.go:61] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.962729  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.962735  452237 system_pods.go:61] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.962740  452237 system_pods.go:61] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.962747  452237 system_pods.go:74] duration metric: took 11.442261888s to wait for pod list to return data ...
	I0109 00:20:10.962755  452237 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:20:10.965782  452237 default_sa.go:45] found service account: "default"
	I0109 00:20:10.965808  452237 default_sa.go:55] duration metric: took 3.046646ms for default service account to be created ...
	I0109 00:20:10.965817  452237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:20:10.972286  452237 system_pods.go:86] 8 kube-system pods found
	I0109 00:20:10.972323  452237 system_pods.go:89] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.972331  452237 system_pods.go:89] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.972340  452237 system_pods.go:89] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.972349  452237 system_pods.go:89] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.972356  452237 system_pods.go:89] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.972366  452237 system_pods.go:89] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.972381  452237 system_pods.go:89] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.972392  452237 system_pods.go:89] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.972406  452237 system_pods.go:126] duration metric: took 6.583119ms to wait for k8s-apps to be running ...
	I0109 00:20:10.972427  452237 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:20:10.972490  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:20:10.992310  452237 system_svc.go:56] duration metric: took 19.873367ms WaitForService to wait for kubelet.
	I0109 00:20:10.992340  452237 kubeadm.go:581] duration metric: took 4m38.919766965s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:20:10.992363  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:20:10.996337  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:20:10.996373  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:20:10.996390  452237 node_conditions.go:105] duration metric: took 4.019869ms to run NodePressure ...
	I0109 00:20:10.996405  452237 start.go:228] waiting for startup goroutines ...
	I0109 00:20:10.996414  452237 start.go:233] waiting for cluster config update ...
	I0109 00:20:10.996429  452237 start.go:242] writing updated cluster config ...
	I0109 00:20:10.996742  452237 ssh_runner.go:195] Run: rm -f paused
	I0109 00:20:11.052916  452237 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0109 00:20:11.055339  452237 out.go:177] * Done! kubectl is now configured to use "no-preload-378213" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:41 UTC, ends at Tue 2024-01-09 00:23:45 UTC. --
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.544144164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759825544127472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4cd70cfe-823d-454c-8985-3c12ddd27e5c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.544805989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=721446cd-971e-4967-9380-cbff04772c86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.544881314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=721446cd-971e-4967-9380-cbff04772c86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.545127793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=721446cd-971e-4967-9380-cbff04772c86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.594121945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=78cc6bcb-a117-42c6-b6bc-4f49f1def692 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.594184496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=78cc6bcb-a117-42c6-b6bc-4f49f1def692 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.595441604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=42ccad1d-fc36-4513-83e6-e2ca0e387539 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.595926102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759825595909820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=42ccad1d-fc36-4513-83e6-e2ca0e387539 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.596742035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d74242a9-c54c-45e5-b470-e7810e9a8cb7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.596832410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d74242a9-c54c-45e5-b470-e7810e9a8cb7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.597137695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d74242a9-c54c-45e5-b470-e7810e9a8cb7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.642202069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6912662c-bc66-4254-9b96-ed00a0521cbf name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.642284905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6912662c-bc66-4254-9b96-ed00a0521cbf name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.643863272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5a6ee71b-7367-46af-a6c1-4833361bbf96 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.644363739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759825644350106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5a6ee71b-7367-46af-a6c1-4833361bbf96 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.644813152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=921b04db-6a51-4931-b2be-21ed6aaf4072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.644893049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=921b04db-6a51-4931-b2be-21ed6aaf4072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.645153601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=921b04db-6a51-4931-b2be-21ed6aaf4072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.681523896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=88fe3769-0d3e-48d7-adfb-f3d6ac95cc2e name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.681593310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=88fe3769-0d3e-48d7-adfb-f3d6ac95cc2e name=/runtime.v1.RuntimeService/Version
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.683039926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=996bccd5-41a0-4083-aad0-d74436f5ee33 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.683391205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759825683377993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=996bccd5-41a0-4083-aad0-d74436f5ee33 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.683841443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28c3c3ab-8325-4cb8-b834-8e359d191c19 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.683886175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28c3c3ab-8325-4cb8-b834-8e359d191c19 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:23:45 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:23:45.684197017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28c3c3ab-8325-4cb8-b834-8e359d191c19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0fd42aafbd15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   c309a3c21eeb1       storage-provisioner
	a342658092873       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   80c1ed307bf19       busybox
	bd1948e3c50bc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   41ec024e32ec6       coredns-5dd5756b68-csrwr
	301f60b371271       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   5b80a8bc10622       kube-proxy-p9dmf
	f2c5c87fdbe85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   c309a3c21eeb1       storage-provisioner
	8cc2cc6a6ffc0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   2b5c57c5143c7       etcd-default-k8s-diff-port-834116
	a457619a25952       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   ee81328024e4d       kube-scheduler-default-k8s-diff-port-834116
	2a0d4cebebe6e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   a3bb290f4c4fd       kube-controller-manager-default-k8s-diff-port-834116
	fc9430c284b97       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   56c2cc0fac5c5       kube-apiserver-default-k8s-diff-port-834116
	
	
	==> coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46273 - 35817 "HINFO IN 2123054911538060451.4617250452686183186. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036280919s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-834116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-834116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=default-k8s-diff-port-834116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_01_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:01:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-834116
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:23:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:21:03 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:21:03 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:21:03 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:21:03 +0000   Tue, 09 Jan 2024 00:10:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    default-k8s-diff-port-834116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5291690b871f4ffe8e230bce47c5b516
	  System UUID:                5291690b-871f-4ffe-8e23-0bce47c5b516
	  Boot ID:                    995ef9c1-c726-4e38-ac79-d0e4b66e8941
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-csrwr                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-834116                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-834116              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-834116     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-p9dmf                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-834116              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-mbf7k                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-834116 event: Registered Node default-k8s-diff-port-834116 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-834116 event: Registered Node default-k8s-diff-port-834116 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.645947] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.730312] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.170046] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.563631] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.511081] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.133780] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.167897] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.120647] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.235724] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan 9 00:10] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +21.023692] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] <==
	{"level":"info","ts":"2024-01-09T00:10:17.753054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:10:17.753159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:10:17.753378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	{"level":"warn","ts":"2024-01-09T00:10:21.886241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.773766ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9419433563977199390 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.73\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.39.73\" value_size:66 lease:196061527122423580 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.73\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-09T00:10:21.886359Z","caller":"traceutil/trace.go:171","msg":"trace[201594983] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"559.66692ms","start":"2024-01-09T00:10:21.32668Z","end":"2024-01-09T00:10:21.886347Z","steps":["trace[201594983] 'process raft request'  (duration: 174.160557ms)","trace[201594983] 'compare'  (duration: 384.65167ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:10:21.886406Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:21.326622Z","time spent":"559.762107ms","remote":"127.0.0.1:44106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.73\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.39.73\" value_size:66 lease:196061527122423580 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.73\" > >"}
	{"level":"warn","ts":"2024-01-09T00:10:22.620156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"546.843425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-01-09T00:10:22.620628Z","caller":"traceutil/trace.go:171","msg":"trace[763981380] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:523; }","duration":"547.307639ms","start":"2024-01-09T00:10:22.07323Z","end":"2024-01-09T00:10:22.620538Z","steps":["trace[763981380] 'range keys from in-memory index tree'  (duration: 546.750006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:22.620779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.073217Z","time spent":"547.546169ms","remote":"127.0.0.1:44144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" "}
	{"level":"warn","ts":"2024-01-09T00:10:22.620936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"581.645425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" ","response":"range_response_count:2 size:9309"}
	{"level":"info","ts":"2024-01-09T00:10:22.621651Z","caller":"traceutil/trace.go:171","msg":"trace[1574384919] range","detail":"{range_begin:/registry/deployments/kube-system/; range_end:/registry/deployments/kube-system0; response_count:2; response_revision:523; }","duration":"582.365155ms","start":"2024-01-09T00:10:22.039269Z","end":"2024-01-09T00:10:22.621634Z","steps":["trace[1574384919] 'range keys from in-memory index tree'  (duration: 581.536354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:22.621692Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.039254Z","time spent":"582.425301ms","remote":"127.0.0.1:44200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":2,"response size":9332,"request content":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" "}
	{"level":"info","ts":"2024-01-09T00:10:23.050163Z","caller":"traceutil/trace.go:171","msg":"trace[566505004] linearizableReadLoop","detail":"{readStateIndex:552; appliedIndex:551; }","duration":"407.572451ms","start":"2024-01-09T00:10:22.642578Z","end":"2024-01-09T00:10:23.05015Z","steps":["trace[566505004] 'read index received'  (duration: 407.374013ms)","trace[566505004] 'applied index is now lower than readState.Index'  (duration: 197.738µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:10:23.050321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.730468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-01-09T00:10:23.050386Z","caller":"traceutil/trace.go:171","msg":"trace[812758295] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/token-cleaner; range_end:; response_count:1; response_revision:523; }","duration":"407.81185ms","start":"2024-01-09T00:10:22.642564Z","end":"2024-01-09T00:10:23.050376Z","steps":["trace[812758295] 'agreement among raft nodes before linearized reading'  (duration: 407.6933ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:23.050439Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.642557Z","time spent":"407.872495ms","remote":"127.0.0.1:44144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":214,"request content":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" "}
	{"level":"info","ts":"2024-01-09T00:10:23.05085Z","caller":"traceutil/trace.go:171","msg":"trace[844840029] transaction","detail":"{read_only:false; number_of_response:0; response_revision:523; }","duration":"409.100554ms","start":"2024-01-09T00:10:22.641742Z","end":"2024-01-09T00:10:23.050842Z","steps":["trace[844840029] 'process raft request'  (duration: 408.335289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:23.05116Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.641727Z","time spent":"409.179023ms","remote":"127.0.0.1:44176","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:328 >> failure:<>"}
	{"level":"info","ts":"2024-01-09T00:10:23.296351Z","caller":"traceutil/trace.go:171","msg":"trace[926543455] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"133.4495ms","start":"2024-01-09T00:10:23.162889Z","end":"2024-01-09T00:10:23.296339Z","steps":["trace[926543455] 'read index received'  (duration: 133.355795ms)","trace[926543455] 'applied index is now lower than readState.Index'  (duration: 93.296µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:10:23.296513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.619846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-01-09T00:10:23.296555Z","caller":"traceutil/trace.go:171","msg":"trace[434775922] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pvc-protection-controller; range_end:; response_count:1; response_revision:523; }","duration":"133.677564ms","start":"2024-01-09T00:10:23.16287Z","end":"2024-01-09T00:10:23.296548Z","steps":["trace[434775922] 'agreement among raft nodes before linearized reading'  (duration: 133.591456ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:10:23.296731Z","caller":"traceutil/trace.go:171","msg":"trace[815516781] transaction","detail":"{read_only:false; number_of_response:0; response_revision:523; }","duration":"135.862499ms","start":"2024-01-09T00:10:23.160864Z","end":"2024-01-09T00:10:23.296727Z","steps":["trace[815516781] 'process raft request'  (duration: 135.428048ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:20:18.065991Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2024-01-09T00:20:18.068765Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":862,"took":"2.054428ms","hash":1354971634}
	{"level":"info","ts":"2024-01-09T00:20:18.068851Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1354971634,"revision":862,"compact-revision":-1}
	
	
	==> kernel <==
	 00:23:46 up 14 min,  0 users,  load average: 0.12, 0.16, 0.15
	Linux default-k8s-diff-port-834116 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] <==
	I0109 00:20:20.107067       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:20:21.107387       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:20:21.107560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:20:21.107600       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:20:21.107461       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:20:21.107707       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:20:21.108707       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:21:19.896429       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:21:21.107782       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:21:21.108130       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:21:21.108173       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:21:21.109258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:21:21.109414       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:21:21.109469       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:22:19.896305       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0109 00:23:19.896368       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:23:21.108775       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:23:21.108933       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:23:21.109031       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:23:21.110079       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:23:21.110215       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:23:21.110300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] <==
	I0109 00:18:04.537583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:18:34.035371       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:18:34.550334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:19:04.040900       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:19:04.559063       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:19:34.048098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:19:34.570613       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:20:04.054644       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:20:04.579502       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:20:34.062033       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:20:34.587768       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:21:04.067423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:21:04.596700       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:21:34.073779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:21:34.608675       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:21:38.388391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="437.383µs"
	I0109 00:21:50.390857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="121.551µs"
	E0109 00:22:04.080451       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:22:04.619344       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:22:34.088363       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:22:34.630741       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:23:04.094427       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:23:04.639801       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:23:34.102175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:23:34.650738       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] <==
	I0109 00:10:23.769296       1 server_others.go:69] "Using iptables proxy"
	I0109 00:10:23.788755       1 node.go:141] Successfully retrieved node IP: 192.168.39.73
	I0109 00:10:23.850374       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:10:23.850677       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:10:23.854843       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:10:23.854917       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:10:23.855298       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:10:23.855338       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:10:23.856253       1 config.go:188] "Starting service config controller"
	I0109 00:10:23.856304       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:10:23.856339       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:10:23.856354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:10:23.858836       1 config.go:315] "Starting node config controller"
	I0109 00:10:23.858949       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:10:23.958591       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:10:23.958738       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:10:23.959790       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] <==
	I0109 00:10:17.954301       1 serving.go:348] Generated self-signed cert in-memory
	W0109 00:10:19.976729       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0109 00:10:19.976875       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:10:19.976922       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0109 00:10:19.977020       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0109 00:10:20.102192       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0109 00:10:20.102302       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:10:20.108804       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0109 00:10:20.109234       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0109 00:10:20.109289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0109 00:10:20.154875       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 00:10:20.256120       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:41 UTC, ends at Tue 2024-01-09 00:23:46 UTC. --
	Jan 09 00:21:13 default-k8s-diff-port-834116 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:21:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:21:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:21:23 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:23.380854     918 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 09 00:21:23 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:23.380916     918 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 09 00:21:23 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:23.381326     918 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8k95t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-mbf7k_kube-system(61b7ea36-0b24-42e9-9937-d20ea545f63d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:21:23 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:23.381373     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:21:38 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:38.370557     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:21:50 default-k8s-diff-port-834116 kubelet[918]: E0109 00:21:50.369882     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:22:04 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:04.369639     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:22:13 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:13.497808     918 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:22:13 default-k8s-diff-port-834116 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:22:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:22:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:22:15 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:15.373415     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:22:27 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:27.370502     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:22:39 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:39.370061     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:22:52 default-k8s-diff-port-834116 kubelet[918]: E0109 00:22:52.369742     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:23:07 default-k8s-diff-port-834116 kubelet[918]: E0109 00:23:07.369037     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:23:13 default-k8s-diff-port-834116 kubelet[918]: E0109 00:23:13.499256     918 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:23:13 default-k8s-diff-port-834116 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:23:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:23:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:23:22 default-k8s-diff-port-834116 kubelet[918]: E0109 00:23:22.369889     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:23:35 default-k8s-diff-port-834116 kubelet[918]: E0109 00:23:35.370155     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	
	
	==> storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] <==
	I0109 00:10:54.765586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:10:54.777428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:10:54.777503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:11:12.194279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:11:12.195252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23!
	I0109 00:11:12.196821       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cd4331f-223f-4ccb-8942-664734695597", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23 became leader
	I0109 00:11:12.295868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23!
	
	
	==> storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] <==
	I0109 00:10:23.772413       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0109 00:10:53.774877       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
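The repeated metrics-server ErrImagePull/ImagePullBackOff entries in the kubelet log above are consistent with the addon having been enabled with its registry overridden to the unresolvable host fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the Audit table further down). A rough sketch of that override for this profile, reconstructed from those Audit entries, is:

	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-834116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain

so the pull failures reflect the test's registry override rather than a problem reaching a real registry.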
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mbf7k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k: exit status 1 (73.430239ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mbf7k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.58s)
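To reproduce the post-mortem pod check above by hand, the same queries from the log can be run directly; the only addition here is an explicit -n kube-system on the describe call, assumed from the kubelet log reporting the pod as kube-system/metrics-server-57f55c9bc5-mbf7k:

	kubectl --context default-k8s-diff-port-834116 get po -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-834116 -n kube-system describe pod metrics-server-57f55c9bc5-mbf7k

Context name, pod name, and field selector are taken verbatim from the helpers_test output above.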

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:17:20.222759  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:17:36.727771  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0109 00:17:51.339546  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:18:00.810933  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:18:36.013142  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:18:43.266651  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003293 -n old-k8s-version-003293
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:26:18.804013773 +0000 UTC m=+5677.680963983
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
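A minimal manual check for the pod this test waits on, using the label and namespace from the wait message above and the profile name as the kubectl context (the pattern used elsewhere in this report), would be:

	kubectl --context old-k8s-version-003293 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

An empty result ("No resources found") would be consistent with the wait timing out as reported here.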
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-003293 logs -n 25: (1.780150076s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo cat                              | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:05:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:05:27.711531  452488 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:05:27.711728  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711742  452488 out.go:309] Setting ErrFile to fd 2...
	I0109 00:05:27.711750  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711982  452488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:05:27.712562  452488 out.go:303] Setting JSON to false
	I0109 00:05:27.713635  452488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17254,"bootTime":1704741474,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:05:27.713709  452488 start.go:138] virtualization: kvm guest
	I0109 00:05:27.716110  452488 out.go:177] * [default-k8s-diff-port-834116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:05:27.718021  452488 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:05:27.719311  452488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:05:27.718049  452488 notify.go:220] Checking for updates...
	I0109 00:05:27.720754  452488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:05:27.722073  452488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:05:27.723496  452488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:05:27.724923  452488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:05:27.726663  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:05:27.727158  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.727261  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.741812  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0109 00:05:27.742300  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.742911  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.742943  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.743249  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.743438  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.743694  452488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:05:27.743987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.744027  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.758231  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0109 00:05:27.758620  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.759039  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.759069  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.759349  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.759570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.797687  452488 out.go:177] * Using the kvm2 driver based on existing profile
	I0109 00:05:27.799282  452488 start.go:298] selected driver: kvm2
	I0109 00:05:27.799301  452488 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.799485  452488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:05:27.800156  452488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.800240  452488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:05:27.815851  452488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:05:27.816303  452488 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:05:27.816371  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:05:27.816384  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:05:27.816406  452488 start_flags.go:323] config:
	{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-83411
6 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.816592  452488 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.818643  452488 out.go:177] * Starting control plane node default-k8s-diff-port-834116 in cluster default-k8s-diff-port-834116
	I0109 00:05:30.179677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:27.820207  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:05:27.820246  452488 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0109 00:05:27.820258  452488 cache.go:56] Caching tarball of preloaded images
	I0109 00:05:27.820344  452488 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:05:27.820354  452488 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:05:27.820455  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:05:27.820632  452488 start.go:365] acquiring machines lock for default-k8s-diff-port-834116: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:05:33.251703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:39.331707  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:42.403645  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:48.483635  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:51.555692  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:57.635653  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:00.707722  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:06.787696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:09.859664  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:15.939733  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:19.011687  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:25.091759  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:28.163666  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:34.243673  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:37.315693  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:43.395652  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:46.467622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:52.547639  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:55.619655  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:01.699734  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:04.771686  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:10.851703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:13.923711  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:20.003883  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:23.075726  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:29.155735  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:32.227698  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:38.307696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:41.379724  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:47.459727  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:50.531708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:56.611621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:59.683677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:05.763622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:08.835708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:14.915674  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:17.987706  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:24.067730  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:27.139621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:33.219667  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:36.291651  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:42.371678  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:45.443660  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:48.448024  451984 start.go:369] acquired machines lock for "embed-certs-845373" in 4m36.156097213s
	I0109 00:08:48.448197  451984 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:08:48.448239  451984 fix.go:54] fixHost starting: 
	I0109 00:08:48.448769  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:08:48.448810  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:08:48.464359  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0109 00:08:48.465014  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:08:48.465634  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:08:48.465669  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:08:48.466022  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:08:48.466241  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:08:48.466431  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:08:48.468132  451984 fix.go:102] recreateIfNeeded on embed-certs-845373: state=Stopped err=<nil>
	I0109 00:08:48.468162  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	W0109 00:08:48.468339  451984 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:08:48.470346  451984 out.go:177] * Restarting existing kvm2 VM for "embed-certs-845373" ...
	I0109 00:08:48.445374  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:08:48.445415  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:08:48.447757  451943 machine.go:91] provisioned docker machine in 4m37.407825673s
	I0109 00:08:48.447823  451943 fix.go:56] fixHost completed within 4m37.428599196s
	I0109 00:08:48.447831  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 4m37.428619873s
	W0109 00:08:48.447876  451943 start.go:694] error starting host: provision: host is not running
	W0109 00:08:48.448289  451943 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0109 00:08:48.448305  451943 start.go:709] Will try again in 5 seconds ...
	I0109 00:08:48.471819  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Start
	I0109 00:08:48.471966  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring networks are active...
	I0109 00:08:48.472753  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network default is active
	I0109 00:08:48.473111  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network mk-embed-certs-845373 is active
	I0109 00:08:48.473441  451984 main.go:141] libmachine: (embed-certs-845373) Getting domain xml...
	I0109 00:08:48.474114  451984 main.go:141] libmachine: (embed-certs-845373) Creating domain...
	I0109 00:08:49.716628  451984 main.go:141] libmachine: (embed-certs-845373) Waiting to get IP...
	I0109 00:08:49.717606  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.718022  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.718080  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.717994  452995 retry.go:31] will retry after 247.787821ms: waiting for machine to come up
	I0109 00:08:49.967655  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.968169  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.968203  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.968101  452995 retry.go:31] will retry after 339.65094ms: waiting for machine to come up
	I0109 00:08:50.309542  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.310008  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.310041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.309944  452995 retry.go:31] will retry after 475.654088ms: waiting for machine to come up
	I0109 00:08:50.787560  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.787930  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.787973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.787876  452995 retry.go:31] will retry after 437.198744ms: waiting for machine to come up
	I0109 00:08:51.226414  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.226866  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.226901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.226817  452995 retry.go:31] will retry after 501.606265ms: waiting for machine to come up
	I0109 00:08:51.730571  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.731041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.731084  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.730949  452995 retry.go:31] will retry after 707.547375ms: waiting for machine to come up
	I0109 00:08:53.450389  451943 start.go:365] acquiring machines lock for old-k8s-version-003293: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:08:52.440038  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:52.440373  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:52.440434  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:52.440330  452995 retry.go:31] will retry after 1.02016439s: waiting for machine to come up
	I0109 00:08:53.462628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:53.463090  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:53.463120  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:53.463037  452995 retry.go:31] will retry after 1.322196175s: waiting for machine to come up
	I0109 00:08:54.786979  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:54.787514  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:54.787540  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:54.787465  452995 retry.go:31] will retry after 1.260135214s: waiting for machine to come up
	I0109 00:08:56.049973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:56.050450  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:56.050478  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:56.050415  452995 retry.go:31] will retry after 1.476819521s: waiting for machine to come up
	I0109 00:08:57.529060  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:57.529497  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:57.529527  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:57.529444  452995 retry.go:31] will retry after 2.830903204s: waiting for machine to come up
	I0109 00:09:00.362901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:00.363333  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:00.363372  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:00.363292  452995 retry.go:31] will retry after 3.093040214s: waiting for machine to come up
	I0109 00:09:03.460541  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:03.461066  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:03.461103  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:03.461032  452995 retry.go:31] will retry after 3.190401984s: waiting for machine to come up
	I0109 00:09:06.654729  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655295  451984 main.go:141] libmachine: (embed-certs-845373) Found IP for machine: 192.168.50.132
	I0109 00:09:06.655331  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has current primary IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655343  451984 main.go:141] libmachine: (embed-certs-845373) Reserving static IP address...
	I0109 00:09:06.655828  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.655851  451984 main.go:141] libmachine: (embed-certs-845373) DBG | skip adding static IP to network mk-embed-certs-845373 - found existing host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"}
	I0109 00:09:06.655865  451984 main.go:141] libmachine: (embed-certs-845373) Reserved static IP address: 192.168.50.132
	I0109 00:09:06.655880  451984 main.go:141] libmachine: (embed-certs-845373) Waiting for SSH to be available...
	I0109 00:09:06.655969  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Getting to WaitForSSH function...
	I0109 00:09:06.658083  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658468  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.658501  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658615  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH client type: external
	I0109 00:09:06.658650  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa (-rw-------)
	I0109 00:09:06.658704  451984 main.go:141] libmachine: (embed-certs-845373) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:06.658725  451984 main.go:141] libmachine: (embed-certs-845373) DBG | About to run SSH command:
	I0109 00:09:06.658741  451984 main.go:141] libmachine: (embed-certs-845373) DBG | exit 0
	I0109 00:09:06.751337  451984 main.go:141] libmachine: (embed-certs-845373) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:06.751683  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetConfigRaw
	I0109 00:09:06.752338  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:06.754749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755133  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.755161  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755475  451984 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/config.json ...
	I0109 00:09:06.755689  451984 machine.go:88] provisioning docker machine ...
	I0109 00:09:06.755710  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:06.755939  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756108  451984 buildroot.go:166] provisioning hostname "embed-certs-845373"
	I0109 00:09:06.756133  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756287  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.758391  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758651  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.758678  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758821  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.759026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759151  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759276  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.759419  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.759891  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.759906  451984 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-845373 && echo "embed-certs-845373" | sudo tee /etc/hostname
	I0109 00:09:06.897829  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845373
	
	I0109 00:09:06.897862  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.900776  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901151  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.901194  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901354  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.901601  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901767  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.902093  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.902429  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.902457  451984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-845373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845373/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-845373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:07.035051  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:07.035088  451984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:07.035106  451984 buildroot.go:174] setting up certificates
	I0109 00:09:07.035141  451984 provision.go:83] configureAuth start
	I0109 00:09:07.035150  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:07.035470  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.038830  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039185  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.039216  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039473  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.041628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.041978  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.042006  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.042138  451984 provision.go:138] copyHostCerts
	I0109 00:09:07.042215  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:07.042235  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:07.042301  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:07.042386  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:07.042394  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:07.042420  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:07.042547  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:07.042557  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:07.042582  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:07.042658  451984 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845373 san=[192.168.50.132 192.168.50.132 localhost 127.0.0.1 minikube embed-certs-845373]
	I0109 00:09:07.146928  451984 provision.go:172] copyRemoteCerts
	I0109 00:09:07.147000  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:07.147026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.149665  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.149999  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.150025  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.150190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.150402  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.150624  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.150778  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.912619  452237 start.go:369] acquired machines lock for "no-preload-378213" in 4m22.586847609s
	I0109 00:09:07.912695  452237 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:07.912705  452237 fix.go:54] fixHost starting: 
	I0109 00:09:07.913160  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:07.913205  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:07.929558  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0109 00:09:07.930071  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:07.930620  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:09:07.930646  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:07.931015  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:07.931232  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:07.931421  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:09:07.933075  452237 fix.go:102] recreateIfNeeded on no-preload-378213: state=Stopped err=<nil>
	I0109 00:09:07.933114  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	W0109 00:09:07.933281  452237 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:07.935418  452237 out.go:177] * Restarting existing kvm2 VM for "no-preload-378213" ...
	I0109 00:09:07.246432  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:07.270463  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0109 00:09:07.294094  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:09:07.317414  451984 provision.go:86] duration metric: configureAuth took 282.256583ms
	I0109 00:09:07.317462  451984 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:07.317651  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:07.317743  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.320246  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320529  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.320557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320724  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.320930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321068  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321199  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.321480  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.321807  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.321831  451984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:07.649960  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:07.649991  451984 machine.go:91] provisioned docker machine in 894.285072ms
	I0109 00:09:07.650005  451984 start.go:300] post-start starting for "embed-certs-845373" (driver="kvm2")
	I0109 00:09:07.650020  451984 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:07.650052  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.650505  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:07.650537  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.653343  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653671  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.653695  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653913  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.654147  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.654345  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.654548  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.745211  451984 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:07.749547  451984 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:07.749608  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:07.749694  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:07.749790  451984 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:07.749906  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:07.758232  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:07.781504  451984 start.go:303] post-start completed in 131.476813ms
	I0109 00:09:07.781532  451984 fix.go:56] fixHost completed within 19.333293059s
	I0109 00:09:07.781556  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.784365  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.784751  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.784774  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.785021  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.785267  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785430  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785570  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.785745  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.786073  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.786085  451984 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:07.912423  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758947.859859847
	
	I0109 00:09:07.912452  451984 fix.go:206] guest clock: 1704758947.859859847
	I0109 00:09:07.912462  451984 fix.go:219] Guest: 2024-01-09 00:09:07.859859847 +0000 UTC Remote: 2024-01-09 00:09:07.781536446 +0000 UTC m=+295.641408793 (delta=78.323401ms)
	I0109 00:09:07.912487  451984 fix.go:190] guest clock delta is within tolerance: 78.323401ms
	I0109 00:09:07.912494  451984 start.go:83] releasing machines lock for "embed-certs-845373", held for 19.464424699s
	I0109 00:09:07.912529  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.912827  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.915749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916146  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.916177  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916358  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.916865  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917042  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917155  451984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:07.917208  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.917263  451984 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:07.917288  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.920121  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920158  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920573  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920608  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920626  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920648  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920703  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920858  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920942  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921034  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921122  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921185  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921263  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.921282  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:08.040953  451984 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:08.046882  451984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:08.204801  451984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:08.214653  451984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:08.214741  451984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:08.232714  451984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:08.232750  451984 start.go:475] detecting cgroup driver to use...
	I0109 00:09:08.232881  451984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:08.254408  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:08.266926  451984 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:08.267015  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:08.278971  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:08.291982  451984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:08.395029  451984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:08.514444  451984 docker.go:219] disabling docker service ...
	I0109 00:09:08.514527  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:08.528548  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:08.540899  451984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:08.669118  451984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:08.776487  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:08.791617  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:08.809437  451984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:08.809509  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.818817  451984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:08.818891  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.828374  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.839820  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.849449  451984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:08.858899  451984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:08.869295  451984 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:08.869377  451984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:08.885387  451984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:08.895106  451984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:09.007897  451984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:09.197656  451984 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:09.197737  451984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:09.203174  451984 start.go:543] Will wait 60s for crictl version
	I0109 00:09:09.203264  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:09:09.207312  451984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:09.245917  451984 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:09.245996  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.296410  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.345334  451984 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0109 00:09:07.937023  452237 main.go:141] libmachine: (no-preload-378213) Calling .Start
	I0109 00:09:07.937229  452237 main.go:141] libmachine: (no-preload-378213) Ensuring networks are active...
	I0109 00:09:07.938093  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network default is active
	I0109 00:09:07.938504  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network mk-no-preload-378213 is active
	I0109 00:09:07.938868  452237 main.go:141] libmachine: (no-preload-378213) Getting domain xml...
	I0109 00:09:07.939609  452237 main.go:141] libmachine: (no-preload-378213) Creating domain...
	I0109 00:09:09.254019  452237 main.go:141] libmachine: (no-preload-378213) Waiting to get IP...
	I0109 00:09:09.254967  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.255375  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.255465  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.255333  453115 retry.go:31] will retry after 260.636384ms: waiting for machine to come up
	I0109 00:09:09.518054  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.518563  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.518590  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.518522  453115 retry.go:31] will retry after 320.770806ms: waiting for machine to come up
	I0109 00:09:09.841203  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.841675  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.841710  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.841604  453115 retry.go:31] will retry after 317.226014ms: waiting for machine to come up
	I0109 00:09:10.160137  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.160545  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.160576  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.160522  453115 retry.go:31] will retry after 452.723717ms: waiting for machine to come up
	I0109 00:09:09.346886  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:09.350050  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350407  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:09.350440  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350626  451984 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:09.354884  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:09.367669  451984 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:09.367765  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:09.407793  451984 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:09.407876  451984 ssh_runner.go:195] Run: which lz4
	I0109 00:09:09.412172  451984 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:09:09.416303  451984 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:09.416331  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:11.408967  451984 crio.go:444] Took 1.996823 seconds to copy over tarball
	I0109 00:09:11.409067  451984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:10.615452  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.615971  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.615999  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.615922  453115 retry.go:31] will retry after 555.714359ms: waiting for machine to come up
	I0109 00:09:11.173767  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:11.174269  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:11.174301  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:11.174220  453115 retry.go:31] will retry after 843.630815ms: waiting for machine to come up
	I0109 00:09:12.019354  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:12.019896  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:12.019962  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:12.019884  453115 retry.go:31] will retry after 1.083324701s: waiting for machine to come up
	I0109 00:09:13.104954  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:13.105499  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:13.105535  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:13.105442  453115 retry.go:31] will retry after 1.445208328s: waiting for machine to come up
	I0109 00:09:14.552723  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:14.553247  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:14.553278  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:14.553202  453115 retry.go:31] will retry after 1.207345182s: waiting for machine to come up
	I0109 00:09:14.301519  451984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892406004s)
	I0109 00:09:14.301567  451984 crio.go:451] Took 2.892564 seconds to extract the tarball
	I0109 00:09:14.301579  451984 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:09:14.344103  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:14.399048  451984 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:09:14.399072  451984 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:09:14.399160  451984 ssh_runner.go:195] Run: crio config
	I0109 00:09:14.459603  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:14.459643  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:14.459693  451984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:14.459752  451984 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845373 NodeName:embed-certs-845373 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:14.460006  451984 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-845373"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:14.460098  451984 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-845373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:14.460176  451984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:09:14.469269  451984 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:14.469363  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:14.479156  451984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0109 00:09:14.496058  451984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:09:14.513299  451984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0109 00:09:14.530721  451984 ssh_runner.go:195] Run: grep 192.168.50.132	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:14.534849  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:14.546999  451984 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373 for IP: 192.168.50.132
	I0109 00:09:14.547045  451984 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:14.547259  451984 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:14.547310  451984 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:14.547456  451984 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/client.key
	I0109 00:09:14.547531  451984 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key.073edd3d
	I0109 00:09:14.547584  451984 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key
	I0109 00:09:14.547733  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:14.547770  451984 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:14.547778  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:14.547803  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:14.547822  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:14.547851  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:14.547891  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:14.548888  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:14.574032  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:14.599543  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:14.625213  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:09:14.650001  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:14.675008  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:14.699179  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:14.722451  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:14.746559  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:14.769631  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:14.792906  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:14.815748  451984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:14.832389  451984 ssh_runner.go:195] Run: openssl version
	I0109 00:09:14.840602  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:14.856001  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862098  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862187  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.868184  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:14.879131  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:14.890092  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894911  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894969  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.900490  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:14.912056  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:14.923126  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.927937  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.928024  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.933646  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:14.944658  451984 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:14.949507  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:14.956040  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:14.962180  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:14.968224  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:14.974087  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:14.980079  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:09:14.986029  451984 kubeadm.go:404] StartCluster: {Name:embed-certs-845373 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:14.986148  451984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:14.986202  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:15.027950  451984 cri.go:89] found id: ""
	I0109 00:09:15.028035  451984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:15.039282  451984 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:15.039314  451984 kubeadm.go:636] restartCluster start
	I0109 00:09:15.039430  451984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:15.049695  451984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.050930  451984 kubeconfig.go:92] found "embed-certs-845373" server: "https://192.168.50.132:8443"
	I0109 00:09:15.053805  451984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:15.064953  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.065018  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.078921  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.565496  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.565626  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.578601  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.065133  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.065227  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.077749  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.565317  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.565425  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.578351  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:17.065861  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.065998  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.078781  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.762565  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:15.762982  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:15.763010  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:15.762909  453115 retry.go:31] will retry after 2.319709932s: waiting for machine to come up
	I0109 00:09:18.083780  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:18.084295  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:18.084330  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:18.084224  453115 retry.go:31] will retry after 2.101421106s: waiting for machine to come up
	I0109 00:09:20.188389  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:20.188770  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:20.188804  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:20.188712  453115 retry.go:31] will retry after 2.578747646s: waiting for machine to come up
	I0109 00:09:17.565567  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.565690  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.578496  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.065006  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.065120  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.078249  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.565568  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.565732  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.582691  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.065249  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.065353  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.082433  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.564998  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.565129  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.582026  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.065462  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.065563  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.079586  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.565150  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.565253  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.581576  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.065135  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.065246  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.080231  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.565856  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.566034  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.582980  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.065130  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.065245  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.078868  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.769370  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:22.769835  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:22.769877  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:22.769775  453115 retry.go:31] will retry after 4.446013118s: waiting for machine to come up
	I0109 00:09:22.565774  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.565850  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.581869  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.065381  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.065511  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.078260  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.565069  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.565171  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.577588  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.065102  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.065184  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.077356  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.565990  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.566090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.578416  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.065960  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:25.066090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:25.078618  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.078652  451984 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:25.078665  451984 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:25.078689  451984 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:25.078759  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:25.117213  451984 cri.go:89] found id: ""
	I0109 00:09:25.117304  451984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:25.133313  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:25.142683  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:25.142755  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152228  451984 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152252  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:25.273216  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.323239  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.049977221s)
	I0109 00:09:26.323274  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.531333  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.605976  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.691914  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:26.692006  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.408538  452488 start.go:369] acquired machines lock for "default-k8s-diff-port-834116" in 4m0.587839533s
	I0109 00:09:28.408614  452488 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:28.408627  452488 fix.go:54] fixHost starting: 
	I0109 00:09:28.409094  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:28.409147  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:28.426990  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0109 00:09:28.427467  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:28.428010  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:09:28.428043  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:28.428413  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:28.428726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:28.428887  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:09:28.430477  452488 fix.go:102] recreateIfNeeded on default-k8s-diff-port-834116: state=Stopped err=<nil>
	I0109 00:09:28.430508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	W0109 00:09:28.430658  452488 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:28.432612  452488 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-834116" ...
	I0109 00:09:27.220872  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221372  452237 main.go:141] libmachine: (no-preload-378213) Found IP for machine: 192.168.61.62
	I0109 00:09:27.221401  452237 main.go:141] libmachine: (no-preload-378213) Reserving static IP address...
	I0109 00:09:27.221416  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has current primary IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221769  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.221820  452237 main.go:141] libmachine: (no-preload-378213) DBG | skip adding static IP to network mk-no-preload-378213 - found existing host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"}
	I0109 00:09:27.221842  452237 main.go:141] libmachine: (no-preload-378213) Reserved static IP address: 192.168.61.62
	I0109 00:09:27.221859  452237 main.go:141] libmachine: (no-preload-378213) Waiting for SSH to be available...
	I0109 00:09:27.221877  452237 main.go:141] libmachine: (no-preload-378213) DBG | Getting to WaitForSSH function...
	I0109 00:09:27.224260  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224609  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.224643  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224762  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH client type: external
	I0109 00:09:27.224792  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa (-rw-------)
	I0109 00:09:27.224822  452237 main.go:141] libmachine: (no-preload-378213) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:27.224832  452237 main.go:141] libmachine: (no-preload-378213) DBG | About to run SSH command:
	I0109 00:09:27.224841  452237 main.go:141] libmachine: (no-preload-378213) DBG | exit 0
	I0109 00:09:27.315335  452237 main.go:141] libmachine: (no-preload-378213) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:27.315823  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetConfigRaw
	I0109 00:09:27.316473  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.319014  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319305  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.319339  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319673  452237 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/config.json ...
	I0109 00:09:27.319916  452237 machine.go:88] provisioning docker machine ...
	I0109 00:09:27.319939  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:27.320167  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320354  452237 buildroot.go:166] provisioning hostname "no-preload-378213"
	I0109 00:09:27.320378  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320575  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.322760  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323156  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.323187  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323317  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.323542  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323711  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323869  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.324061  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.324556  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.324577  452237 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-378213 && echo "no-preload-378213" | sudo tee /etc/hostname
	I0109 00:09:27.452901  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-378213
	
	I0109 00:09:27.452957  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.456295  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456636  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.456693  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456919  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.457140  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457343  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457491  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.457671  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.458159  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.458188  452237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-378213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-378213/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-378213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:27.579589  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:27.579626  452237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:27.579658  452237 buildroot.go:174] setting up certificates
	I0109 00:09:27.579674  452237 provision.go:83] configureAuth start
	I0109 00:09:27.579688  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.580039  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.583100  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583557  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.583592  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583759  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.586482  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.586816  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.586862  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.587019  452237 provision.go:138] copyHostCerts
	I0109 00:09:27.587091  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:27.587105  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:27.587162  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:27.587246  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:27.587256  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:27.587276  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:27.587326  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:27.587333  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:27.587350  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:27.587423  452237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.no-preload-378213 san=[192.168.61.62 192.168.61.62 localhost 127.0.0.1 minikube no-preload-378213]
	I0109 00:09:27.642093  452237 provision.go:172] copyRemoteCerts
	I0109 00:09:27.642159  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:27.642186  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.645245  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645702  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.645736  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645959  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.646180  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.646367  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.646552  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:27.740878  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:09:27.770934  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:27.794548  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:27.819155  452237 provision.go:86] duration metric: configureAuth took 239.463059ms
	I0109 00:09:27.819191  452237 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:27.819452  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:09:27.819556  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.822793  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823249  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.823282  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.823666  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823812  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823943  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.824098  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.824547  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.824575  452237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:28.155878  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:28.155939  452237 machine.go:91] provisioned docker machine in 835.996764ms
	I0109 00:09:28.155955  452237 start.go:300] post-start starting for "no-preload-378213" (driver="kvm2")
	I0109 00:09:28.155975  452237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:28.156002  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.156370  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:28.156408  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.159411  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.159831  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.159863  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.160134  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.160347  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.160553  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.160700  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.249092  452237 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:28.253686  452237 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:28.253721  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:28.253812  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:28.253914  452237 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:28.254042  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:28.262550  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:28.286467  452237 start.go:303] post-start completed in 130.492214ms
	I0109 00:09:28.286497  452237 fix.go:56] fixHost completed within 20.373793038s
	I0109 00:09:28.286527  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.289569  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290022  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.290056  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290374  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.290619  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.290815  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.291040  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.291256  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:28.291770  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:28.291788  452237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:28.408354  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758968.353872845
	
	I0109 00:09:28.408384  452237 fix.go:206] guest clock: 1704758968.353872845
	I0109 00:09:28.408392  452237 fix.go:219] Guest: 2024-01-09 00:09:28.353872845 +0000 UTC Remote: 2024-01-09 00:09:28.286503221 +0000 UTC m=+283.122022206 (delta=67.369624ms)
	I0109 00:09:28.408411  452237 fix.go:190] guest clock delta is within tolerance: 67.369624ms
	I0109 00:09:28.408416  452237 start.go:83] releasing machines lock for "no-preload-378213", held for 20.495748993s
	I0109 00:09:28.408448  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.408745  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:28.411951  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412357  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.412395  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412550  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413258  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413495  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413588  452237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:28.413639  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.414067  452237 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:28.414125  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.416878  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417049  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417271  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417292  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417550  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417710  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.417720  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417771  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417896  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.417935  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.418108  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.418105  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.418226  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.533738  452237 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:28.541801  452237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:28.692517  452237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:28.700384  452237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:28.700455  452237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:28.720264  452237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:28.720300  452237 start.go:475] detecting cgroup driver to use...
	I0109 00:09:28.720376  452237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:28.739758  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:28.755682  452237 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:28.755754  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:28.772178  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:28.792261  452237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:28.908562  452237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:29.042390  452237 docker.go:219] disabling docker service ...
	I0109 00:09:29.042528  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:29.055964  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:29.071788  452237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:29.191963  452237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:29.322608  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:29.336149  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:29.357616  452237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:29.357765  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.372357  452237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:29.372436  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.393266  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.405729  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.417114  452237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:29.428259  452237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:29.440397  452237 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:29.440499  452237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:29.454482  452237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:29.467600  452237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:29.590644  452237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:29.786115  452237 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:29.786205  452237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:29.793049  452237 start.go:543] Will wait 60s for crictl version
	I0109 00:09:29.793129  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:29.798630  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:29.847758  452237 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:29.847850  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.905071  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.963992  452237 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:09:29.965790  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:29.969222  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969638  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:29.969687  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969930  452237 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:29.974709  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:29.989617  452237 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:09:29.989667  452237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:30.034776  452237 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:09:30.034804  452237 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.034911  452237 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0109 00:09:30.034925  452237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.034948  452237 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.035060  452237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.034904  452237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.035172  452237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036679  452237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036727  452237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.036737  452237 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.036788  452237 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0109 00:09:30.036814  452237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.036730  452237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.036846  452237 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.036678  452237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.208127  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:27.192095  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:27.692608  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.192176  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.692194  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.192059  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.219995  451984 api_server.go:72] duration metric: took 2.528085009s to wait for apiserver process to appear ...
	I0109 00:09:29.220032  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:09:29.220058  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:28.434238  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Start
	I0109 00:09:28.434453  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring networks are active...
	I0109 00:09:28.435324  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network default is active
	I0109 00:09:28.435804  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network mk-default-k8s-diff-port-834116 is active
	I0109 00:09:28.436322  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Getting domain xml...
	I0109 00:09:28.437072  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Creating domain...
	I0109 00:09:29.958911  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting to get IP...
	I0109 00:09:29.959938  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960820  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960896  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:29.960822  453241 retry.go:31] will retry after 210.498897ms: waiting for machine to come up
	I0109 00:09:30.173307  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173717  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173752  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.173670  453241 retry.go:31] will retry after 342.664675ms: waiting for machine to come up
	I0109 00:09:30.518442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519113  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.519069  453241 retry.go:31] will retry after 411.240969ms: waiting for machine to come up
	I0109 00:09:30.931762  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932152  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.932104  453241 retry.go:31] will retry after 402.965268ms: waiting for machine to come up
	I0109 00:09:31.336957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337426  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337459  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.337393  453241 retry.go:31] will retry after 626.321347ms: waiting for machine to come up
	I0109 00:09:31.965071  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965632  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965665  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.965592  453241 retry.go:31] will retry after 787.166947ms: waiting for machine to come up
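The retry.go:31 lines above are the KVM driver polling libvirt for the new domain's IP with growing, jittered delays. A rough sketch of that pattern, with lookupIP as a made-up placeholder for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt for the domain's DHCP lease.
func lookupIP() (string, error) { return "", errors.New("machine has no IP yet") }

// waitForIP retries with an increasing, jittered delay, like the
// "will retry after 210ms / 342ms / 411ms ..." messages above.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", errors.New("timed out waiting for machine to get an IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}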
	I0109 00:09:30.217603  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0109 00:09:30.234877  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.243097  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.258262  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.273678  452237 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0109 00:09:30.273761  452237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.273826  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.278909  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.285277  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.289552  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.430758  452237 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0109 00:09:30.430813  452237 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.430866  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.430995  452237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0109 00:09:30.431023  452237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.431061  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456561  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.456591  452237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0109 00:09:30.456636  452237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.456690  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456722  452237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0109 00:09:30.456757  452237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.456791  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456911  452237 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0109 00:09:30.456945  452237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.456976  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.482028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.482298  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.482547  452237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0109 00:09:30.482694  452237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.482754  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.518754  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.518899  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.518966  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.519317  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0109 00:09:30.519422  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.629846  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630082  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630145  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630189  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630022  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0109 00:09:30.630280  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:30.630028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.657819  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657907  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0109 00:09:30.657966  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657824  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0109 00:09:30.658025  452237 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658053  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:30.658084  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0109 00:09:30.658091  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658142  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0109 00:09:30.658173  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0109 00:09:30.714523  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:30.714654  452237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:32.867027  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.208889866s)
	I0109 00:09:32.867091  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0109 00:09:32.867107  452237 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.209103985s)
	I0109 00:09:32.867122  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:32.867141  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867187  452237 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.209109716s)
	I0109 00:09:32.867221  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0109 00:09:32.867220  452237 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.15254199s)
	I0109 00:09:32.867251  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867190  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:35.150432  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.283143174s)
	I0109 00:09:35.150478  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0109 00:09:35.150509  452237 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:35.150560  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:34.179483  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.179518  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.179533  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.210742  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.210780  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.220940  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.259813  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.259869  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:34.720337  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.733062  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.733105  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.220599  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.228775  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:35.228814  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.720241  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.725882  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:09:35.736706  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:09:35.736745  451984 api_server.go:131] duration metric: took 6.516702561s to wait for apiserver health ...
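The health wait above keeps hitting /healthz until it answers 200 "ok", treating the intermediate 403 and 500 responses as "still starting". A rough equivalent of that poll; the anonymous TLS client (no client certificate, verification skipped) is why the apiserver reports user "system:anonymous" in the 403s:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200.
// 403 (RBAC not bootstrapped yet) and 500 (poststarthooks still failing) are
// treated as "not ready"; the loop simply tries again.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to report ok", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.132:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}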
	I0109 00:09:35.736790  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:35.736811  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:35.739014  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:09:35.740624  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:09:35.776055  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
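The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above configures the standard bridge plugin for the pod network. The exact bytes are not shown in the log, so the layout below is an assumption based on the common bridge/portmap schema rather than minikube's own template:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape of a bridge CNI conflist; field values are illustrative only.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // the pod CIDR used elsewhere in this run
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}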
	I0109 00:09:35.814280  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:09:35.832281  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:09:35.832330  451984 system_pods.go:61] "coredns-5dd5756b68-vkd62" [c676d069-cca7-428c-8eec-026ecea14be2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:09:35.832342  451984 system_pods.go:61] "etcd-embed-certs-845373" [92d4616d-126c-4ee9-9475-9d0c790090c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:09:35.832354  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [9663f585-eca1-4f8f-8a93-aea9b4e98c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:09:35.832368  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [41b4ce59-d838-4798-b593-93c7c8573733] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:09:35.832383  451984 system_pods.go:61] "kube-proxy-tbzpb" [132469d5-d267-4869-ad09-c9fba8d0f9d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:09:35.832398  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [336147ec-8318-496b-986d-55845e7dd9a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:09:35.832408  451984 system_pods.go:61] "metrics-server-57f55c9bc5-2p4js" [c37e24f3-c50b-4169-9d0b-48e21072a114] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:09:35.832421  451984 system_pods.go:61] "storage-provisioner" [e558d9f2-6d92-41d6-82bf-194f53ead52c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:09:35.832436  451984 system_pods.go:74] duration metric: took 18.123808ms to wait for pod list to return data ...
	I0109 00:09:35.832451  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:09:35.836031  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:35.836180  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:35.836225  451984 node_conditions.go:105] duration metric: took 3.766883ms to run NodePressure ...
	I0109 00:09:35.836250  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:36.192967  451984 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198294  451984 kubeadm.go:787] kubelet initialised
	I0109 00:09:36.198327  451984 kubeadm.go:788] duration metric: took 5.32566ms waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198373  451984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:36.205198  451984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:36.230481  451984 pod_ready.go:97] node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230560  451984 pod_ready.go:81] duration metric: took 25.328027ms waiting for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	E0109 00:09:36.230576  451984 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230600  451984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:32.754128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779281  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779328  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:32.754425  453241 retry.go:31] will retry after 781.872506ms: waiting for machine to come up
	I0109 00:09:33.538136  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538643  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:33.538562  453241 retry.go:31] will retry after 1.315575893s: waiting for machine to come up
	I0109 00:09:34.856083  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857209  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857287  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:34.857007  453241 retry.go:31] will retry after 1.252692701s: waiting for machine to come up
	I0109 00:09:36.111647  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112092  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112127  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:36.112042  453241 retry.go:31] will retry after 1.549931798s: waiting for machine to come up
	I0109 00:09:37.664325  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664771  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664841  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:37.664729  453241 retry.go:31] will retry after 2.220936863s: waiting for machine to come up
	I0109 00:09:39.585741  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.435146297s)
	I0109 00:09:39.585853  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0109 00:09:39.585890  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:39.585954  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:38.239319  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:40.240459  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:39.886897  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887409  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887446  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:39.887322  453241 retry.go:31] will retry after 3.125817684s: waiting for machine to come up
	I0109 00:09:42.688186  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.102196226s)
	I0109 00:09:42.688238  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0109 00:09:42.688270  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:42.688333  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:44.144243  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.455874893s)
	I0109 00:09:44.144277  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0109 00:09:44.144322  452237 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:44.144396  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:45.193429  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.048998334s)
	I0109 00:09:45.193464  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0109 00:09:45.193501  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:45.193553  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:42.241597  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:44.740359  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:46.239061  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.239098  451984 pod_ready.go:81] duration metric: took 10.008483597s waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.239112  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244571  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.244598  451984 pod_ready.go:81] duration metric: took 5.476365ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244610  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249839  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.249866  451984 pod_ready.go:81] duration metric: took 5.248385ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249891  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254718  451984 pod_ready.go:92] pod "kube-proxy-tbzpb" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.254742  451984 pod_ready.go:81] duration metric: took 4.843779ms waiting for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254752  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:43.016904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017479  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:43.017386  453241 retry.go:31] will retry after 3.976875386s: waiting for machine to come up
	I0109 00:09:46.996452  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996902  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996937  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:46.996855  453241 retry.go:31] will retry after 5.149738116s: waiting for machine to come up
	I0109 00:09:47.750708  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.557124662s)
	I0109 00:09:47.750737  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0109 00:09:47.750767  452237 cache_images.go:123] Successfully loaded all cached images
	I0109 00:09:47.750773  452237 cache_images.go:92] LoadImages completed in 17.715956149s
	I0109 00:09:47.750871  452237 ssh_runner.go:195] Run: crio config
	I0109 00:09:47.811486  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:09:47.811510  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:47.811535  452237 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:47.811560  452237 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.62 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-378213 NodeName:no-preload-378213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:47.811757  452237 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-378213"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:47.811881  452237 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-378213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:47.811954  452237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:09:47.821353  452237 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:47.821426  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:47.830117  452237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0109 00:09:47.847966  452237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:09:47.865130  452237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0109 00:09:47.881920  452237 ssh_runner.go:195] Run: grep 192.168.61.62	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:47.885907  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:47.899472  452237 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213 for IP: 192.168.61.62
	I0109 00:09:47.899519  452237 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:47.899687  452237 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:47.899729  452237 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:47.899792  452237 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/client.key
	I0109 00:09:47.899854  452237 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key.fe752756
	I0109 00:09:47.899891  452237 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key
	I0109 00:09:47.899991  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:47.900022  452237 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:47.900033  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:47.900056  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:47.900084  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:47.900111  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:47.900176  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:47.900831  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:47.926702  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:47.952472  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:47.977143  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:09:48.001909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:48.028506  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:48.054909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:48.079320  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:48.106719  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:48.133440  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:48.157353  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:48.180860  452237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:48.198490  452237 ssh_runner.go:195] Run: openssl version
	I0109 00:09:48.204240  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:48.214015  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218654  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218717  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.224372  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:48.233922  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:48.243425  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248305  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248381  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.254018  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:48.263791  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:48.273568  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278373  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278438  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.284003  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:48.296358  452237 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:48.301336  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:48.307645  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:48.313470  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:48.319349  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:48.325344  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:48.331352  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
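Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which would force regeneration before restarting the cluster. A rough Go equivalent of that validity check; the path in main is just one of the certs listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, i.e. the Go analogue of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks the apiserver, etcd and front-proxy client certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}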
	I0109 00:09:48.337159  452237 kubeadm.go:404] StartCluster: {Name:no-preload-378213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:48.337255  452237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:48.337302  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:48.374150  452237 cri.go:89] found id: ""
	I0109 00:09:48.374229  452237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:48.383627  452237 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:48.383649  452237 kubeadm.go:636] restartCluster start
	I0109 00:09:48.383699  452237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:48.392428  452237 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.393515  452237 kubeconfig.go:92] found "no-preload-378213" server: "https://192.168.61.62:8443"
	I0109 00:09:48.395997  452237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:48.404639  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.404708  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.416205  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.904794  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.904896  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.916391  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.404903  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.405006  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.416469  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.905053  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.905224  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.916621  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
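
The repeated "Checking apiserver status" entries are a fixed-interval poll: roughly every 500 ms the runner looks for a kube-apiserver process with pgrep and treats a non-zero exit as "stopped". A hedged bash equivalent of that loop (the 60 s deadline is illustrative, not taken from the log):

    deadline=$((SECONDS + 60))
    while (( SECONDS < deadline )); do
        if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
            echo "apiserver running, pid ${pid}"
            break
        fi
        sleep 0.5   # matches the ~500 ms cadence visible in the timestamps above
    done
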
	I0109 00:09:48.262991  451984 pod_ready.go:102] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:50.262235  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:50.262262  451984 pod_ready.go:81] duration metric: took 4.007503301s waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:50.262275  451984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
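
The pod_ready waits above poll the pod's Ready condition for up to 4 minutes. Outside the test harness the same wait can be expressed directly with kubectl (context, namespace and pod name are taken from the log; this is an equivalent, not what the harness itself runs):

    kubectl --context embed-certs-845373 -n kube-system \
        wait pod/metrics-server-57f55c9bc5-2p4js --for=condition=Ready --timeout=4m
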
	I0109 00:09:52.150891  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151383  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Found IP for machine: 192.168.39.73
	I0109 00:09:52.151416  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserving static IP address...
	I0109 00:09:52.151442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has current primary IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.151943  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | skip adding static IP to network mk-default-k8s-diff-port-834116 - found existing host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"}
	I0109 00:09:52.151966  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserved static IP address: 192.168.39.73
	I0109 00:09:52.152005  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for SSH to be available...
	I0109 00:09:52.152039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Getting to WaitForSSH function...
	I0109 00:09:52.154139  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154471  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.154514  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154642  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH client type: external
	I0109 00:09:52.154672  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa (-rw-------)
	I0109 00:09:52.154701  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:52.154719  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | About to run SSH command:
	I0109 00:09:52.154736  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | exit 0
	I0109 00:09:52.247320  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:52.247704  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetConfigRaw
	I0109 00:09:52.248366  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.251047  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251482  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.251511  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251734  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:09:52.251981  452488 machine.go:88] provisioning docker machine ...
	I0109 00:09:52.252003  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:52.252219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252396  452488 buildroot.go:166] provisioning hostname "default-k8s-diff-port-834116"
	I0109 00:09:52.252418  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252612  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.254861  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255244  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.255276  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255485  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.255657  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255956  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.256111  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.256468  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.256485  452488 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-834116 && echo "default-k8s-diff-port-834116" | sudo tee /etc/hostname
	I0109 00:09:52.392092  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-834116
	
	I0109 00:09:52.392128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.394807  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395260  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.395312  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395539  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.395797  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396091  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396289  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.396464  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.396839  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.396863  452488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-834116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-834116/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-834116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:52.527950  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:52.527981  452488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:52.528006  452488 buildroot.go:174] setting up certificates
	I0109 00:09:52.528021  452488 provision.go:83] configureAuth start
	I0109 00:09:52.528033  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.528365  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.531179  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531597  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.531624  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531763  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.534073  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534480  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.534521  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534650  452488 provision.go:138] copyHostCerts
	I0109 00:09:52.534726  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:52.534737  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:52.534796  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:52.534902  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:52.534912  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:52.534933  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:52.535020  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:52.535027  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:52.535042  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:52.535093  452488 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-834116 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube default-k8s-diff-port-834116]
	I0109 00:09:53.636158  451943 start.go:369] acquired machines lock for "old-k8s-version-003293" in 1m0.185697203s
	I0109 00:09:53.636214  451943 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:53.636222  451943 fix.go:54] fixHost starting: 
	I0109 00:09:53.636646  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:53.636682  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:53.654194  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0109 00:09:53.654606  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:53.655203  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:09:53.655227  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:53.655659  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:53.655927  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:09:53.656139  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:09:53.657909  451943 fix.go:102] recreateIfNeeded on old-k8s-version-003293: state=Stopped err=<nil>
	I0109 00:09:53.657934  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	W0109 00:09:53.658135  451943 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:53.660261  451943 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003293" ...
	I0109 00:09:52.872029  452488 provision.go:172] copyRemoteCerts
	I0109 00:09:52.872106  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:52.872134  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.874824  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875218  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.875256  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.875726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.875959  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.876122  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:52.970940  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:52.995353  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0109 00:09:53.019846  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:53.048132  452488 provision.go:86] duration metric: configureAuth took 520.096734ms
	I0109 00:09:53.048166  452488 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:53.048357  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:53.048458  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.051336  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051745  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.051781  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051963  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.052200  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052578  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.052753  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.053273  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.053296  452488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:53.371482  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:53.371519  452488 machine.go:91] provisioned docker machine in 1.119521349s
	I0109 00:09:53.371534  452488 start.go:300] post-start starting for "default-k8s-diff-port-834116" (driver="kvm2")
	I0109 00:09:53.371572  452488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:53.371601  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.371940  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:53.371968  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.374606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.374999  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.375039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.375242  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.375487  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.375668  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.375823  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.469684  452488 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:53.474184  452488 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:53.474226  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:53.474291  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:53.474375  452488 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:53.474510  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:53.484106  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:53.508477  452488 start.go:303] post-start completed in 136.921252ms
	I0109 00:09:53.508516  452488 fix.go:56] fixHost completed within 25.099889324s
	I0109 00:09:53.508540  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.511508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.511954  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.511993  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.512174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.512412  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512605  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512739  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.512966  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.513304  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.513319  452488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:53.635969  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758993.581588382
	
	I0109 00:09:53.635992  452488 fix.go:206] guest clock: 1704758993.581588382
	I0109 00:09:53.636001  452488 fix.go:219] Guest: 2024-01-09 00:09:53.581588382 +0000 UTC Remote: 2024-01-09 00:09:53.508520878 +0000 UTC m=+265.847432935 (delta=73.067504ms)
	I0109 00:09:53.636037  452488 fix.go:190] guest clock delta is within tolerance: 73.067504ms
	I0109 00:09:53.636042  452488 start.go:83] releasing machines lock for "default-k8s-diff-port-834116", held for 25.227459425s
	I0109 00:09:53.636078  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.636408  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:53.639469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.639957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.639990  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.640149  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640967  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.641079  452488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:53.641126  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.641236  452488 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:53.641263  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.643872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644145  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644230  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644258  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644427  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644519  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644552  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644618  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644698  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644784  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.644850  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644945  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.645012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.645188  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.758973  452488 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:53.765494  452488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:53.913457  452488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:53.921317  452488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:53.921409  452488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:53.937393  452488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:53.937422  452488 start.go:475] detecting cgroup driver to use...
	I0109 00:09:53.937501  452488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:53.954986  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:53.967577  452488 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:53.967661  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:53.981370  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:53.994954  452488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:54.113662  452488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:54.257917  452488 docker.go:219] disabling docker service ...
	I0109 00:09:54.258009  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:54.275330  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:54.287545  452488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:54.413696  452488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:54.534759  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:54.548789  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:54.567131  452488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:54.567209  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.578605  452488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:54.578690  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.588764  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.598290  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.608187  452488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:54.619339  452488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:54.627744  452488 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:54.627810  452488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:54.640572  452488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:54.649169  452488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:54.774028  452488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:54.981035  452488 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:54.981123  452488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:54.986812  452488 start.go:543] Will wait 60s for crictl version
	I0109 00:09:54.986874  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:09:54.991067  452488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:55.026881  452488 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:55.026988  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.084315  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.135003  452488 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
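
The runtime preparation above amounts to pointing crictl at the CRI-O socket and rewriting a few keys in /etc/crio/crio.conf.d/02-crio.conf before restarting the service. A condensed view of the end state those edits leave behind (values copied from the log; the real drop-in file carries more options than shown here):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
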
	I0109 00:09:50.405359  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.405454  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.417541  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:50.904703  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.904809  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.916106  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.404732  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.404823  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.418697  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.905352  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.905439  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.917655  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.404773  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.404858  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.417345  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.905434  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.905529  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.916604  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.404704  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.404820  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.416990  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.905624  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.905727  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.918455  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.404944  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.405034  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.419015  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.905601  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.905738  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.921252  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.661730  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Start
	I0109 00:09:53.661977  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring networks are active...
	I0109 00:09:53.662718  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network default is active
	I0109 00:09:53.663173  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network mk-old-k8s-version-003293 is active
	I0109 00:09:53.663701  451943 main.go:141] libmachine: (old-k8s-version-003293) Getting domain xml...
	I0109 00:09:53.664456  451943 main.go:141] libmachine: (old-k8s-version-003293) Creating domain...
	I0109 00:09:55.030325  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting to get IP...
	I0109 00:09:55.031241  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.031720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.031800  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.031693  453422 retry.go:31] will retry after 209.915867ms: waiting for machine to come up
	I0109 00:09:55.243218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.243740  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.243792  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.243678  453422 retry.go:31] will retry after 309.964884ms: waiting for machine to come up
	I0109 00:09:55.555468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.556044  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.556075  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.555982  453422 retry.go:31] will retry after 306.870224ms: waiting for machine to come up
	I0109 00:09:55.864558  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.865161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.865199  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.865113  453422 retry.go:31] will retry after 475.599739ms: waiting for machine to come up
	I0109 00:09:52.270751  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:54.271341  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:56.775574  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:55.136380  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:55.139749  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140142  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:55.140174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140387  452488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:55.145715  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:55.159881  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:55.159972  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:55.209715  452488 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:55.209814  452488 ssh_runner.go:195] Run: which lz4
	I0109 00:09:55.214766  452488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:09:55.219645  452488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:55.219683  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:57.101116  452488 crio.go:444] Took 1.886420 seconds to copy over tarball
	I0109 00:09:57.101207  452488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
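
No preloaded images were found on the node, so the ~458 MB preload tarball is copied over and unpacked into /var with lz4-compressed tar. Once the extraction (which completes at 00:10:00 further down) finishes, the result can be confirmed on the node with the same image listing the log performs afterwards, e.g.:

    sudo crictl images | grep registry.k8s.io/kube-apiserver   # should list v1.28.4 after the preload
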
	I0109 00:09:55.405633  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.405734  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.420242  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:55.905578  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.905685  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.923018  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.405516  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.405602  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.420028  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.905320  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.905409  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.940464  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.404810  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.404925  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.420965  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.905566  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.905684  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.920601  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:58.404728  452237 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:58.404779  452237 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:58.404821  452237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:58.404906  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:58.450415  452237 cri.go:89] found id: ""
	I0109 00:09:58.450510  452237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:58.469938  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:58.481877  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:58.481963  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494336  452237 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494367  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:58.644325  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.472346  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.715956  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.857573  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.962996  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:59.963097  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
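
Because none of the /etc/kubernetes/*.conf files survived the stop, the restart path regenerates everything by running individual kubeadm init phases against the saved /var/tmp/minikube/kubeadm.yaml rather than a full kubeadm init. Spelled out as plain commands (same binaries path and config file as in the log lines above):

    KB=/var/lib/minikube/binaries/v1.29.0-rc.2
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KB:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$KB:$PATH" kubeadm init phase etcd local --config "$CFG"
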
	I0109 00:09:56.342815  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.343422  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.343456  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.343365  453422 retry.go:31] will retry after 512.8445ms: waiting for machine to come up
	I0109 00:09:56.858161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.858689  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.858720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.858631  453422 retry.go:31] will retry after 649.65221ms: waiting for machine to come up
	I0109 00:09:57.509509  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:57.510080  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:57.510121  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:57.510023  453422 retry.go:31] will retry after 1.153518379s: waiting for machine to come up
	I0109 00:09:58.665328  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:58.665946  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:58.665986  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:58.665886  453422 retry.go:31] will retry after 1.392576392s: waiting for machine to come up
	I0109 00:10:00.060701  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:00.061368  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:00.061416  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:00.061263  453422 retry.go:31] will retry after 1.185250663s: waiting for machine to come up
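(The libmachine lines above are a retry loop: the kvm2 driver asks libvirt for the domain's DHCP lease and, while no IP has been assigned yet, schedules another attempt after a growing, jittered delay, hence the "will retry after 512.8445ms / 649.65221ms / 1.153518379s ..." messages. A minimal sketch of that pattern, assuming a caller-supplied probe function; retryWithBackoff and the delay constants are illustrative, not minikube's actual retry.go.)

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling probe until it succeeds or attempts run out,
    // sleeping a little longer (with jitter) between tries.
    func retryWithBackoff(probe func() error, attempts int) error {
    	delay := 300 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if err := probe(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow the wait between attempts
    	}
    	return errors.New("machine never reported an IP address")
    }

    func main() {
    	attempt := 0
    	probe := func() error {
    		attempt++
    		if attempt < 4 { // pretend the DHCP lease shows up on the 4th try
    			return errors.New("no IP yet")
    		}
    		return nil
    	}
    	if err := retryWithBackoff(probe, 10); err != nil {
    		fmt.Println(err)
    	}
    }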
	I0109 00:09:59.270305  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:01.271958  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:00.887146  452488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.785897124s)
	I0109 00:10:00.887183  452488 crio.go:451] Took 3.786033 seconds to extract the tarball
	I0109 00:10:00.887196  452488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:00.940322  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:01.087742  452488 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:10:01.087778  452488 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:10:01.087861  452488 ssh_runner.go:195] Run: crio config
	I0109 00:10:01.154384  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:01.154411  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:01.154432  452488 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:01.154460  452488 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-834116 NodeName:default-k8s-diff-port-834116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:10:01.154664  452488 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-834116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
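(The generated kubeadm.yaml shown above is one file holding several YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch of reading such a multi-document file, assuming gopkg.in/yaml.v3 is available; this is illustrative, not minikube's own parsing code.)

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // no more documents in the file
    		} else if err != nil {
    			panic(err)
    		}
    		// Each document carries its own apiVersion/kind pair.
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }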
	
	I0109 00:10:01.154768  452488 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-834116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0109 00:10:01.154837  452488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:10:01.165075  452488 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:01.165167  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:01.175380  452488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0109 00:10:01.198018  452488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:01.216515  452488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0109 00:10:01.238477  452488 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:01.242706  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
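(The grep/cp one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP: any existing entry for that name is dropped and the current mapping appended. A rough Go equivalent of the same idempotent update; the helper name rewriteHosts and the /tmp/hosts.updated output path are illustrative, and the final copy over /etc/hosts would still need sudo, as in the log.)

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // rewriteHosts returns the hosts file content with any old entry for name
    // removed and a single fresh "ip<TAB>name" line appended.
    func rewriteHosts(current, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(current, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale mapping
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	updated := rewriteHosts(string(data), "192.168.39.73", "control-plane.minikube.internal")
    	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote /tmp/hosts.updated")
    }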
	I0109 00:10:01.256799  452488 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116 for IP: 192.168.39.73
	I0109 00:10:01.256833  452488 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:01.257009  452488 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:01.257084  452488 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:01.257180  452488 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/client.key
	I0109 00:10:01.257272  452488 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key.8b49dc8b
	I0109 00:10:01.257330  452488 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key
	I0109 00:10:01.257473  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:01.257512  452488 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:01.257529  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:01.257582  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:01.257632  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:01.257674  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:01.257737  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:01.258699  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:01.288498  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:10:01.315010  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:01.342657  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:10:01.368423  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:01.394295  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:01.423461  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:01.452044  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:01.478834  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:01.505029  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:01.531765  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:01.557126  452488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:01.575037  452488 ssh_runner.go:195] Run: openssl version
	I0109 00:10:01.580971  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:01.592882  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598205  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598285  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.604293  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:01.615508  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:01.625979  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631195  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631268  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.637322  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:01.649611  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:01.661754  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667033  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667114  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.673312  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:01.687649  452488 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:01.694523  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:01.701260  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:01.709371  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:01.717249  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:01.724104  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:01.730706  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
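(The openssl "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours; anything closer to expiry than that would be regenerated. The same check can be done natively with crypto/x509; the function name validForAnotherDay is illustrative and the path below is just one of the certificates listed in the log.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validForAnotherDay reports whether the PEM certificate at path is still
    // valid 24 hours from now (the equivalent of `openssl x509 -checkend 86400`).
    func validForAnotherDay(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }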
	I0109 00:10:01.738716  452488 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:01.738846  452488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:01.738935  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:01.789522  452488 cri.go:89] found id: ""
	I0109 00:10:01.789639  452488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:01.802440  452488 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:01.802470  452488 kubeadm.go:636] restartCluster start
	I0109 00:10:01.802530  452488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:01.814839  452488 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:01.816303  452488 kubeconfig.go:92] found "default-k8s-diff-port-834116" server: "https://192.168.39.73:8444"
	I0109 00:10:01.818978  452488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:01.829115  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:01.829200  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:01.841947  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:02.329489  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.329629  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.346716  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:00.463974  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:00.963295  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.463906  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.963508  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.463259  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.964275  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.464037  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.963542  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.998344  452237 api_server.go:72] duration metric: took 4.035357514s to wait for apiserver process to appear ...
	I0109 00:10:03.998383  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:03.998415  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:03.999025  452237 api_server.go:269] stopped: https://192.168.61.62:8443/healthz: Get "https://192.168.61.62:8443/healthz": dial tcp 192.168.61.62:8443: connect: connection refused
	I0109 00:10:04.498619  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:01.248726  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:01.249297  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:01.249334  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:01.249190  453422 retry.go:31] will retry after 2.101995832s: waiting for machine to come up
	I0109 00:10:03.353250  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:03.353837  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:03.353870  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:03.353803  453422 retry.go:31] will retry after 2.338357499s: waiting for machine to come up
	I0109 00:10:05.694257  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:05.694773  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:05.694805  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:05.694753  453422 retry.go:31] will retry after 2.962877462s: waiting for machine to come up
	I0109 00:10:03.772407  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:05.776569  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:02.829349  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.829477  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.845294  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.329917  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.330034  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.345877  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.829787  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.829908  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.845499  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.329869  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.329968  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.345228  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.829841  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.829964  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.841831  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.329392  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.329534  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.344928  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.829388  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.829490  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.845517  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.329745  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.329846  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.344692  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.829201  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.829339  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.844107  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.329562  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.329679  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.341888  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.617974  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.618015  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.618037  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:07.676283  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.676318  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.999237  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.036271  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.036307  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.498881  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.504457  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.504490  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.998535  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:09.009194  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:10:09.017267  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:10:09.017300  452237 api_server.go:131] duration metric: took 5.018909056s to wait for apiserver health ...
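(The healthz sequence above is the usual shape of an apiserver restart: first a 403 because the unauthenticated probe is rejected before the RBAC bootstrap roles exist, then 500s while poststarthooks such as rbac/bootstrap-roles and bootstrap-controller finish, and finally 200 with body "ok". A minimal poller in the same spirit, assuming a cluster-CA-signed serving cert that we skip verifying for this local smoke check; the endpoint below is the one from the log, and this is a sketch, not minikube's own client.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skip verification only because this is a local smoke check
    			// against a cert signed by the cluster's own CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.61.62:8443/healthz"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d\n", resp.StatusCode)
    		} else {
    			fmt.Println("not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for healthz")
    }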
	I0109 00:10:09.017311  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:10:09.017319  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:09.019322  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:09.020666  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:09.030282  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:09.049477  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:09.063218  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:09.063264  452237 system_pods.go:61] "coredns-76f75df574-kw4v7" [6a2a3896-7b4c-4912-9e6a-0033564d211b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:09.063277  452237 system_pods.go:61] "etcd-no-preload-378213" [b650412b-fa3a-4490-9b43-caf6ac1cb8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:09.063294  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [b372f056-7243-416e-905f-ba80a332005a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:09.063307  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8b32fab5-ef2b-4145-8cf8-8ec616a73798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:09.063317  452237 system_pods.go:61] "kube-proxy-kxjqj" [40d27586-c2e4-407e-ac43-c0dbd851427e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:09.063325  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [2a609b1f-ce89-4e95-b56c-c84702352967] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:09.063343  452237 system_pods.go:61] "metrics-server-57f55c9bc5-th24j" [9f47b0d1-1399-4349-8f99-d85598461c68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:09.063383  452237 system_pods.go:61] "storage-provisioner" [f12f48e3-4e11-47e4-b785-ca9b47cbc0a4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:09.063396  452237 system_pods.go:74] duration metric: took 13.893709ms to wait for pod list to return data ...
	I0109 00:10:09.063407  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:09.067414  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:09.067457  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:09.067474  452237 node_conditions.go:105] duration metric: took 4.056143ms to run NodePressure ...
	I0109 00:10:09.067507  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:09.383666  452237 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389727  452237 kubeadm.go:787] kubelet initialised
	I0109 00:10:09.389749  452237 kubeadm.go:788] duration metric: took 6.05357ms waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389758  452237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:09.397162  452237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
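(pod_ready.go is waiting for each system-critical pod's Ready condition to become True. A sketch of the same check with client-go, assuming the k8s.io/client-go dependency and a kubeconfig at the path shown; the helper isPodReady is illustrative, and minikube's own helper layers retries and label-based selection on top of this.)

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true when the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-76f75df574-kw4v7", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", isPodReady(pod))
    }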
	I0109 00:10:08.658880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:08.659431  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:08.659468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:08.659353  453422 retry.go:31] will retry after 4.088487909s: waiting for machine to come up
	I0109 00:10:08.271546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:10.273183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:07.830081  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.830237  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.846118  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.329537  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.329642  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.345267  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.829229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.829351  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.845147  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.329244  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.329371  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.343552  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.829910  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.829999  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.841589  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.330229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.330316  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.346027  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.830077  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.830193  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.842301  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.329908  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.330029  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.341398  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.829904  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.830007  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.841281  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.841317  452488 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:11.841340  452488 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:11.841350  452488 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:11.841406  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:11.880872  452488 cri.go:89] found id: ""
	I0109 00:10:11.880993  452488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:11.896522  452488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:11.905372  452488 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:11.905452  452488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915053  452488 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915083  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:12.053489  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:11.406042  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:13.406387  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.752603  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753243  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has current primary IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753276  451943 main.go:141] libmachine: (old-k8s-version-003293) Found IP for machine: 192.168.72.81
	I0109 00:10:12.753290  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserving static IP address...
	I0109 00:10:12.753738  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.753770  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | skip adding static IP to network mk-old-k8s-version-003293 - found existing host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"}
	I0109 00:10:12.753790  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserved static IP address: 192.168.72.81
	I0109 00:10:12.753812  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting for SSH to be available...
	I0109 00:10:12.753829  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Getting to WaitForSSH function...
	I0109 00:10:12.756348  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756765  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.756798  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756931  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH client type: external
	I0109 00:10:12.756959  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa (-rw-------)
	I0109 00:10:12.756995  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:10:12.757008  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | About to run SSH command:
	I0109 00:10:12.757025  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | exit 0
	I0109 00:10:12.908563  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | SSH cmd err, output: <nil>: 
	I0109 00:10:12.909330  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetConfigRaw
	I0109 00:10:12.910245  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:12.913338  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.913744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.913778  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.914153  451943 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/config.json ...
	I0109 00:10:12.914422  451943 machine.go:88] provisioning docker machine ...
	I0109 00:10:12.914451  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:12.914678  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.914869  451943 buildroot.go:166] provisioning hostname "old-k8s-version-003293"
	I0109 00:10:12.914895  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.915042  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:12.917551  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.917918  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.917949  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.918083  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:12.918284  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918477  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918637  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:12.918824  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:12.919390  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:12.919409  451943 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003293 && echo "old-k8s-version-003293" | sudo tee /etc/hostname
	I0109 00:10:13.077570  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003293
	
	I0109 00:10:13.077613  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.081190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081575  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.081599  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081874  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.082128  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082377  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082568  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.082783  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.083268  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.083293  451943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003293/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:10:13.235134  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:10:13.235167  451943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:10:13.235216  451943 buildroot.go:174] setting up certificates
	I0109 00:10:13.235236  451943 provision.go:83] configureAuth start
	I0109 00:10:13.235254  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:13.235632  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:13.239282  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.239867  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.239902  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.240253  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.243109  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243516  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.243546  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243730  451943 provision.go:138] copyHostCerts
	I0109 00:10:13.243811  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:10:13.243826  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:10:13.243917  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:10:13.244095  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:10:13.244109  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:10:13.244139  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:10:13.244233  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:10:13.244244  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:10:13.244271  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:10:13.244357  451943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003293 san=[192.168.72.81 192.168.72.81 localhost 127.0.0.1 minikube old-k8s-version-003293]
	I0109 00:10:13.358229  451943 provision.go:172] copyRemoteCerts
	I0109 00:10:13.358298  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:10:13.358329  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.361495  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.361925  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.361961  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.362229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.362512  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.362707  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.362901  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:13.464633  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:10:13.491908  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:10:13.520424  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:10:13.551287  451943 provision.go:86] duration metric: configureAuth took 316.030603ms
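(The server certificate generated above carries both IP and DNS SANs so one cert is valid for 192.168.72.81, localhost, minikube and the node name. A rough illustration of issuing a SAN-bearing cert with crypto/x509 follows; it is self-signed for brevity, whereas the real provisioner signs with the minikube CA, and the organization string is a placeholder.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.example"}}, // placeholder org
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs: the same cert must be accepted for the node IP, loopback and hostnames.
            IPAddresses: []net.IP{net.ParseIP("192.168.72.81"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-003293"},
        }
        // Self-signed here for brevity; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }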
	I0109 00:10:13.551322  451943 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:10:13.551588  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:10:13.551689  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.554570  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.554888  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.554941  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.555088  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.555402  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555595  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555803  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.555991  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.556435  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.556461  451943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:10:13.929994  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:10:13.930040  451943 machine.go:91] provisioned docker machine in 1.015597473s
	I0109 00:10:13.930056  451943 start.go:300] post-start starting for "old-k8s-version-003293" (driver="kvm2")
	I0109 00:10:13.930076  451943 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:10:13.930107  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:13.930498  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:10:13.930537  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.933680  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934172  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.934218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934589  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.934794  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.935029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.935189  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.038045  451943 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:10:14.044182  451943 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:10:14.044220  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:10:14.044315  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:10:14.044455  451943 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:10:14.044602  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:10:14.056820  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:14.083704  451943 start.go:303] post-start completed in 153.628012ms
	I0109 00:10:14.083736  451943 fix.go:56] fixHost completed within 20.447514213s
	I0109 00:10:14.083765  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.087190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087732  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.087776  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087968  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.088229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088467  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088630  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.088863  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:14.089367  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:14.089389  451943 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:10:14.224545  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704759014.163550757
	
	I0109 00:10:14.224580  451943 fix.go:206] guest clock: 1704759014.163550757
	I0109 00:10:14.224591  451943 fix.go:219] Guest: 2024-01-09 00:10:14.163550757 +0000 UTC Remote: 2024-01-09 00:10:14.083740733 +0000 UTC m=+363.223126670 (delta=79.810024ms)
	I0109 00:10:14.224620  451943 fix.go:190] guest clock delta is within tolerance: 79.810024ms
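(The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the 79.8ms drift. A toy version of that comparison using the two timestamps from this log is shown below; the one-second tolerance is an assumption for illustration, not necessarily the value minikube uses.)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1704759014, 163550757) // guest clock from the log
        host := time.Unix(1704759014, 83740733)   // host-side "Remote" timestamp from the log

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance, for illustration only
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }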
	I0109 00:10:14.224627  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 20.588443227s
	I0109 00:10:14.224659  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.224961  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:14.228116  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228565  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.228645  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228870  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229553  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229781  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229882  451943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:10:14.229958  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.230034  451943 ssh_runner.go:195] Run: cat /version.json
	I0109 00:10:14.230062  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.233060  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233305  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233484  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233511  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233691  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.233903  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233926  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233959  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234064  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.234220  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234290  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234400  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.234418  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234557  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.328685  451943 ssh_runner.go:195] Run: systemctl --version
	I0109 00:10:14.359854  451943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:10:14.515121  451943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:10:14.525585  451943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:10:14.525668  451943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:10:14.549678  451943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:10:14.549719  451943 start.go:475] detecting cgroup driver to use...
	I0109 00:10:14.549804  451943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:10:14.569734  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:10:14.587820  451943 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:10:14.587921  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:10:14.601724  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:10:14.615402  451943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:10:14.732774  451943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:10:14.872480  451943 docker.go:219] disabling docker service ...
	I0109 00:10:14.872579  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:10:14.887044  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:10:14.904944  451943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:10:15.043833  451943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:10:15.162992  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:10:15.176677  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:10:15.197594  451943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0109 00:10:15.197674  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.207993  451943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:10:15.208071  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.218230  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.228291  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.238163  451943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:10:15.248394  451943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:10:15.257457  451943 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:10:15.257541  451943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:10:15.271604  451943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:10:15.282409  451943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:10:15.401506  451943 ssh_runner.go:195] Run: sudo systemctl restart crio
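(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, the cgroup manager and the conmon cgroup, before CRI-O is restarted. A small Go stand-in for those edits is sketched below; the path and keys mirror the log, the regexes are simplified, and it should only ever be pointed at a copy of the file when experimenting.)

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces (or appends) a `key = "value"` line in a TOML-style CRI-O drop-in.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.Match(conf) {
            return re.ReplaceAll(conf, []byte(line))
        }
        return append(conf, []byte("\n"+line+"\n")...)
    }

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf" // as in the log
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.1")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        conf = setKey(conf, "conmon_cgroup", "pod")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }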
	I0109 00:10:15.586851  451943 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:10:15.586942  451943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:10:15.593734  451943 start.go:543] Will wait 60s for crictl version
	I0109 00:10:15.593798  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:15.598705  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:10:15.642640  451943 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:10:15.642751  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.714964  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.773793  451943 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0109 00:10:15.775287  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:15.778313  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.778769  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:15.778795  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.779046  451943 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:10:15.783496  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:15.795338  451943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:10:15.795427  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:15.844077  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:15.844162  451943 ssh_runner.go:195] Run: which lz4
	I0109 00:10:15.848502  451943 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:10:15.852893  451943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:10:15.852949  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
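(The stat probe above exits with status 1 because /preloaded.tar.lz4 is not on the VM yet, so the ~441 MB preload tarball is pushed over SCP. The decision itself is just "copy only if the target is missing"; a local-filesystem analogue of that guard, with placeholder paths, is sketched here.)

    package main

    import (
        "errors"
        "fmt"
        "io"
        "io/fs"
        "os"
    )

    // copyIfMissing copies src to dst only when dst does not already exist,
    // mirroring the stat-then-scp guard in the log.
    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, skip the expensive copy
        } else if !errors.Is(err, fs.ErrNotExist) {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Placeholder paths; the real transfer goes host -> VM over SSH.
        err := copyIfMissing("preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4", "/tmp/preloaded.tar.lz4")
        fmt.Println(err)
    }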
	I0109 00:10:12.274183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:14.770967  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:16.781482  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.786247  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.017442  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.128701  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.223775  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:13.223873  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:13.724895  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.224593  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.724375  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.224993  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.724059  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.747019  452488 api_server.go:72] duration metric: took 2.523230788s to wait for apiserver process to appear ...
	I0109 00:10:15.747056  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:15.747083  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.747711  452488 api_server.go:269] stopped: https://192.168.39.73:8444/healthz: Get "https://192.168.39.73:8444/healthz": dial tcp 192.168.39.73:8444: connect: connection refused
	I0109 00:10:16.247411  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.407079  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.407307  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:19.407533  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.632956  451943 crio.go:444] Took 1.784489 seconds to copy over tarball
	I0109 00:10:17.633087  451943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:10:19.999506  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:19.999551  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:19.999569  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.066949  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:20.066982  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:20.247460  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.256943  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.256985  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:20.747576  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.755833  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.755892  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:21.247473  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:21.255476  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:10:21.266074  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:10:21.266115  452488 api_server.go:131] duration metric: took 5.519049271s to wait for apiserver health ...
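(The retry loop above keeps probing https://192.168.39.73:8444/healthz until the anonymous 403s and the 500s from the not-yet-finished rbac/bootstrap-roles and bootstrap-system-priority-classes hooks turn into a plain 200 "ok". A compact sketch of that kind of poll is shown below; it skips TLS verification and uses a fixed interval, both simplifications of what minikube actually does.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls a healthz URL until it returns 200 or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.39.73:8444/healthz", time.Minute))
    }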
	I0109 00:10:21.266127  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:21.266136  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:21.401812  452488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:19.272981  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.770765  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.903126  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:21.921050  452488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:21.946628  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:21.959029  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:21.959077  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:21.959089  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:21.959100  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:21.959110  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:21.959125  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:21.959141  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:21.959149  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:21.959165  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:21.959178  452488 system_pods.go:74] duration metric: took 12.524667ms to wait for pod list to return data ...
	I0109 00:10:21.959198  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:21.963572  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:21.963614  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:21.963629  452488 node_conditions.go:105] duration metric: took 4.420685ms to run NodePressure ...
	I0109 00:10:21.963653  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:23.566660  452488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.602978271s)
	I0109 00:10:23.566704  452488 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573882  452488 kubeadm.go:787] kubelet initialised
	I0109 00:10:23.573911  452488 kubeadm.go:788] duration metric: took 7.19484ms waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573923  452488 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:23.590206  452488 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.603347  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603402  452488 pod_ready.go:81] duration metric: took 13.169776ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.603416  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603426  452488 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.614946  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.614986  452488 pod_ready.go:81] duration metric: took 11.548332ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.615003  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.615012  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.628345  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628378  452488 pod_ready.go:81] duration metric: took 13.353873ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.628389  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628396  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.635987  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636023  452488 pod_ready.go:81] duration metric: took 7.619372ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.636043  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636072  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.972993  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973028  452488 pod_ready.go:81] duration metric: took 336.946722ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.973040  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973046  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.371951  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.371991  452488 pod_ready.go:81] duration metric: took 398.932785ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.372016  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.372026  452488 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.775778  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775825  452488 pod_ready.go:81] duration metric: took 403.787436ms waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.775842  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775867  452488 pod_ready.go:38] duration metric: took 1.201917208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
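(Each pod_ready wait above short-circuits with "skipping!" because the node itself still reports Ready:"False"; the underlying per-pod test reduces to reading the pod's PodReady condition. A minimal client-go version of that check is sketched below; the kubeconfig path is a placeholder, and the namespace and pod name are taken from this log purely as an example.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder kubeconfig
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5dd5756b68-csrwr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", isPodReady(pod))
    }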
	I0109 00:10:24.775895  452488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:10:24.793136  452488 ops.go:34] apiserver oom_adj: -16
	I0109 00:10:24.793169  452488 kubeadm.go:640] restartCluster took 22.990690796s
	I0109 00:10:24.793182  452488 kubeadm.go:406] StartCluster complete in 23.05448254s
	I0109 00:10:24.793207  452488 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.793302  452488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:10:24.795707  452488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.796107  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:10:24.796368  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:10:24.796346  452488 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:10:24.796413  452488 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796432  452488 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796457  452488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-834116"
	I0109 00:10:24.796466  452488 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.796477  452488 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:10:24.796560  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796982  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.796998  452488 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.797017  452488 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:24.797020  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0109 00:10:24.797025  452488 addons.go:246] addon metrics-server should already be in state true
	I0109 00:10:24.797083  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797296  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.797477  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797513  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.803857  452488 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-834116" context rescaled to 1 replicas
	I0109 00:10:24.803958  452488 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:10:24.806278  452488 out.go:177] * Verifying Kubernetes components...
	I0109 00:10:24.807850  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:10:24.817319  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0109 00:10:24.817600  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0109 00:10:24.817766  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818023  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818247  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818270  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.818697  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.818899  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818913  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0109 00:10:24.818937  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.819412  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.819459  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.823502  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.823611  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.824834  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.824859  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.824880  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.825291  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.826131  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.826160  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.829056  452488 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.829115  452488 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:10:24.829158  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.829610  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.829968  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.839969  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0109 00:10:24.840508  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.841140  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.841167  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.841542  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.841864  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.843844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.846088  452488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:24.844882  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0109 00:10:24.848051  452488 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:24.848069  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:10:24.848093  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.848445  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.849053  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.849074  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.849484  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.849550  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0109 00:10:24.849671  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.851401  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.851914  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.851961  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.853938  452488 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:10:22.516402  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:24.907337  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.059397  451943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.42624365s)
	I0109 00:10:21.059430  451943 crio.go:451] Took 3.426440 seconds to extract the tarball
	I0109 00:10:21.059441  451943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:21.109544  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:21.177321  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:21.177353  451943 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:10:21.177408  451943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.177455  451943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.177499  451943 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.177679  451943 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.177728  451943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.177688  451943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179256  451943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179325  451943 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0109 00:10:21.179257  451943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.179429  451943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.179551  451943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.179599  451943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.179888  451943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.180077  451943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.354975  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0109 00:10:21.363097  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.390461  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.393703  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.423416  451943 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0109 00:10:21.423475  451943 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0109 00:10:21.423523  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.433698  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.446038  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.466118  451943 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0109 00:10:21.466213  451943 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.466351  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.499618  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.516687  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.517553  451943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0109 00:10:21.517576  451943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0109 00:10:21.517608  451943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.517642  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0109 00:10:21.517653  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.517609  451943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.517735  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.543109  451943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0109 00:10:21.543170  451943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.543228  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571015  451943 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0109 00:10:21.571069  451943 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.571122  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571130  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.627517  451943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0109 00:10:21.627573  451943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.627623  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.730620  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0109 00:10:21.730693  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.730751  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.730772  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.730775  451943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.730876  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.730899  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0109 00:10:21.730965  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.861219  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0109 00:10:21.861308  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0109 00:10:21.870996  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0109 00:10:21.871033  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0109 00:10:21.871087  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0109 00:10:21.871117  451943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0109 00:10:21.871136  451943 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.871176  451943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0109 00:10:23.431278  451943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.560066098s)
	I0109 00:10:23.431320  451943 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0109 00:10:23.431403  451943 cache_images.go:92] LoadImages completed in 2.25403413s
	W0109 00:10:23.431502  451943 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0109 00:10:23.431630  451943 ssh_runner.go:195] Run: crio config
	I0109 00:10:23.501412  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:23.501437  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:23.501460  451943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:23.501478  451943 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003293 NodeName:old-k8s-version-003293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0109 00:10:23.501642  451943 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003293"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-003293
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.81:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:23.501740  451943 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003293 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:10:23.501815  451943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0109 00:10:23.515496  451943 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:23.515613  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:23.528701  451943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0109 00:10:23.549023  451943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:23.568686  451943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0109 00:10:23.588702  451943 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:23.593056  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
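
Editor's note: the bash one-liner above makes control-plane.minikube.internal resolve to the node IP by rewriting /etc/hosts in place. A minimal Go sketch of the same idea (hypothetical helper, not minikube's actual implementation):

// updateHosts drops any existing mapping for host, then appends "ip<TAB>host",
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ pattern in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // filter any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.72.81", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
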
	I0109 00:10:23.609254  451943 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293 for IP: 192.168.72.81
	I0109 00:10:23.609338  451943 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:23.609556  451943 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:23.609643  451943 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:23.609767  451943 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/client.key
	I0109 00:10:23.609842  451943 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key.289ddd16
	I0109 00:10:23.609908  451943 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key
	I0109 00:10:23.610069  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:23.610137  451943 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:23.610158  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:23.610197  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:23.610232  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:23.610265  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:23.610323  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:23.611274  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:23.637653  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0109 00:10:23.664578  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:23.694133  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:10:23.722658  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:23.750223  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:23.778539  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:23.802865  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:23.829553  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:23.857468  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:23.886744  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:23.913384  451943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:23.931928  451943 ssh_runner.go:195] Run: openssl version
	I0109 00:10:23.938105  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:23.949750  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955870  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955954  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.962486  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:23.975292  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:23.988504  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.993956  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.994025  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:24.000015  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:24.010775  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:24.021665  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026909  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026972  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.032957  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
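
Editor's note: the test -L / ln -fs pairs above follow OpenSSL's hashed-directory convention: each trusted CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0 (here b5213941.0 for minikubeCA.pem). A rough Go sketch of producing such a link, assuming openssl is on PATH (illustrative helper, not minikube code):

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into certsDir as "<hash>.0", matching the
// "openssl x509 -hash -noout" + "ln -fs" pair seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
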
	I0109 00:10:24.043813  451943 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:24.048745  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:24.055015  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:24.061551  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:24.068075  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:24.075942  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:24.081898  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
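
Editor's note: each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what triggers regeneration. The equivalent check in Go, as a minimal sketch using crypto/x509 (cert path taken from the log):

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, mirroring "openssl x509 -checkend <seconds>".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
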
	I0109 00:10:24.088900  451943 kubeadm.go:404] StartCluster: {Name:old-k8s-version-003293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:24.089008  451943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:24.089075  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:24.138907  451943 cri.go:89] found id: ""
	I0109 00:10:24.139089  451943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:24.152607  451943 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:24.152636  451943 kubeadm.go:636] restartCluster start
	I0109 00:10:24.152696  451943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:24.166246  451943 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.167660  451943 kubeconfig.go:92] found "old-k8s-version-003293" server: "https://192.168.72.81:8443"
	I0109 00:10:24.171161  451943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:24.183456  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.183533  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.197246  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.684537  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.684670  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.698158  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.184562  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.184662  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.196624  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.684258  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.684379  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.699808  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
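
Editor's note: the repeated "Checking apiserver status" entries above and below are a poll: pgrep is re-run for the kube-apiserver process roughly every 500ms until a PID appears or the deadline passes. A simplified local sketch of that loop (the real code runs the command on the VM via ssh_runner; the helper name here is illustrative):

// waitForAPIServerPID polls "pgrep -xnf kube-apiserver.*minikube.*" every
// 500ms until a PID shows up or the timeout expires.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
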
	I0109 00:10:24.852491  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.852608  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.852621  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.855293  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.855444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.855453  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:10:24.855467  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:10:24.855484  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.855664  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.855746  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.855858  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.856036  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.857435  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.857481  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.858678  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859181  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.859219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859402  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.859570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.859724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.859856  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.875791  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0109 00:10:24.876275  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.876817  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.876856  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.877200  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.877454  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.879333  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.879644  452488 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:24.879661  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:10:24.879677  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.882683  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.883208  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883504  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.883694  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.883877  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.884070  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:25.036727  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:25.071034  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:10:25.071059  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:10:25.079722  452488 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:25.079745  452488 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0109 00:10:25.096822  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:25.107155  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:10:25.107187  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:10:25.149550  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:25.149576  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:10:25.202736  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:26.696247  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.659482228s)
	I0109 00:10:26.696317  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696334  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696330  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.599464128s)
	I0109 00:10:26.696379  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696398  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696816  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696856  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696855  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696865  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696874  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696883  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696899  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696908  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696935  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696945  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.697254  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697306  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697406  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697461  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697410  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.712803  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.712835  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.713140  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.713162  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736360  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.533581555s)
	I0109 00:10:26.736408  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.736780  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.736826  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.736841  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736852  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.737154  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.737190  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.737205  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.737215  452488 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:26.739310  452488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:10:23.774928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.270567  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.740691  452488 addons.go:508] enable addons completed in 1.94435105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:10:27.084669  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:27.404032  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.407712  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.184150  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.184272  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.196020  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:26.684603  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.684710  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.699571  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.184212  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.184309  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.196193  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.684572  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.684658  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.697405  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.183918  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.184043  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.197428  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.684565  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.684683  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.698124  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.183601  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.183725  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.195941  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.683554  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.683647  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.695548  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.184015  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.184116  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.196332  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.684533  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.684661  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.697315  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.771203  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.269907  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.584966  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:30.585616  452488 node_ready.go:49] node "default-k8s-diff-port-834116" has status "Ready":"True"
	I0109 00:10:30.585646  452488 node_ready.go:38] duration metric: took 5.505876157s waiting for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:30.585661  452488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:30.593510  452488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602388  452488 pod_ready.go:92] pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.602420  452488 pod_ready.go:81] duration metric: took 8.875538ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602438  452488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608316  452488 pod_ready.go:92] pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.608343  452488 pod_ready.go:81] duration metric: took 5.896652ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608355  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614031  452488 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.614056  452488 pod_ready.go:81] duration metric: took 5.692676ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614068  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619101  452488 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.619120  452488 pod_ready.go:81] duration metric: took 5.045637ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619129  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986089  452488 pod_ready.go:92] pod "kube-proxy-p9dmf" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.986121  452488 pod_ready.go:81] duration metric: took 366.984678ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986135  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385215  452488 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:31.385244  452488 pod_ready.go:81] duration metric: took 399.100168ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385254  452488 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
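
Editor's note: the pod_ready.go lines above wait for each system-critical pod's Ready condition to report True. A minimal sketch of that predicate using the Kubernetes API types (not minikube's exact implementation):

// isPodReady returns true when the pod's PodReady condition is ConditionTrue,
// the same condition the pod_ready checks in the log are waiting on.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// tiny self-contained example: a pod literal with a Ready=True condition
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod))
}
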
	I0109 00:10:31.904561  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.905393  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.183976  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.184088  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:31.683769  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.683876  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.695944  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.184543  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.184631  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.197273  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.683504  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.683613  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.696431  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.183904  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.183981  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.195623  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.684295  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.684408  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.697442  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.184151  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:34.184264  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:34.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.196409  451943 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:34.196451  451943 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:34.196467  451943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:34.196558  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:34.243566  451943 cri.go:89] found id: ""
	I0109 00:10:34.243656  451943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:34.260912  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:34.270763  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:34.270859  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280082  451943 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280114  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:34.411011  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.279804  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.503377  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.616758  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.707051  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:35.707153  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:33.771119  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.271823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.399336  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.893942  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.905685  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:38.408847  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.207669  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:36.708189  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.207300  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.259562  451943 api_server.go:72] duration metric: took 1.552509336s to wait for apiserver process to appear ...
	I0109 00:10:37.259602  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:37.259628  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:38.272478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.272571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:37.894659  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.393328  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.393530  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.260559  451943 api_server.go:269] stopped: https://192.168.72.81:8443/healthz: Get "https://192.168.72.81:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0109 00:10:42.260609  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.136163  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.136216  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.136236  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.196804  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.196846  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.260001  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.270495  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.270549  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:43.759989  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.813746  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.813787  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.260614  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.271111  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:44.271144  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.760496  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.771584  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:10:44.780881  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:10:44.780911  451943 api_server.go:131] duration metric: took 7.521300216s to wait for apiserver health ...
	I0109 00:10:44.780923  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:44.780933  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:44.783223  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:40.906182  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:43.407169  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.784832  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:44.802495  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:44.821665  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:44.832420  451943 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:44.832452  451943 system_pods.go:61] "coredns-5644d7b6d9-5hqlw" [b6d5e87b-e72e-47bb-92b2-afecece262c5] Running
	I0109 00:10:44.832456  451943 system_pods.go:61] "coredns-5644d7b6d9-j4nnt" [d8995b4a-0ebf-406b-9937-09ba09591c78] Running
	I0109 00:10:44.832462  451943 system_pods.go:61] "etcd-old-k8s-version-003293" [8b9f9b32-dfe9-4cfe-856b-3aec43645e1e] Running
	I0109 00:10:44.832467  451943 system_pods.go:61] "kube-apiserver-old-k8s-version-003293" [48f5c692-7501-45ae-a53a-49e330129c36] Running
	I0109 00:10:44.832471  451943 system_pods.go:61] "kube-controller-manager-old-k8s-version-003293" [e458a3e9-ae8b-4ab7-bdc5-61b4321cca4a] Running
	I0109 00:10:44.832475  451943 system_pods.go:61] "kube-proxy-bc4tl" [74020495-07c6-441b-9b46-2f6a103d65eb] Running
	I0109 00:10:44.832478  451943 system_pods.go:61] "kube-scheduler-old-k8s-version-003293" [6a8e330c-f4bb-4bfd-b610-9071077fbb0f] Running
	I0109 00:10:44.832482  451943 system_pods.go:61] "storage-provisioner" [cbfd54c3-1952-4c0f-9272-29e2a8a4d5ed] Running
	I0109 00:10:44.832489  451943 system_pods.go:74] duration metric: took 10.801262ms to wait for pod list to return data ...
	I0109 00:10:44.832498  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:44.836130  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:44.836175  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:44.836196  451943 node_conditions.go:105] duration metric: took 3.685161ms to run NodePressure ...
	I0109 00:10:44.836220  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:45.117528  451943 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:45.121965  451943 retry.go:31] will retry after 324.075641ms: kubelet not initialised
	I0109 00:10:45.451702  451943 retry.go:31] will retry after 510.869227ms: kubelet not initialised
	I0109 00:10:42.770145  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.271625  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.394539  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:46.894669  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.910325  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:48.406435  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.969561  451943 retry.go:31] will retry after 435.571732ms: kubelet not initialised
	I0109 00:10:46.411948  451943 retry.go:31] will retry after 1.046618493s: kubelet not initialised
	I0109 00:10:47.471972  451943 retry.go:31] will retry after 1.328746031s: kubelet not initialised
	I0109 00:10:48.805606  451943 retry.go:31] will retry after 1.964166074s: kubelet not initialised
	I0109 00:10:50.776656  451943 retry.go:31] will retry after 2.966424358s: kubelet not initialised
	I0109 00:10:47.271965  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.773571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.393384  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:51.393857  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:50.905980  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:52.404441  452237 pod_ready.go:92] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.404467  452237 pod_ready.go:81] duration metric: took 43.007278698s waiting for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.404477  452237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409827  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.409851  452237 pod_ready.go:81] duration metric: took 5.368556ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409862  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415211  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.415233  452237 pod_ready.go:81] duration metric: took 5.363915ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415243  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420309  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.420329  452237 pod_ready.go:81] duration metric: took 5.078283ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420337  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425229  452237 pod_ready.go:92] pod "kube-proxy-kxjqj" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.425251  452237 pod_ready.go:81] duration metric: took 4.908776ms waiting for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425260  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.801958  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.801989  452237 pod_ready.go:81] duration metric: took 376.723222ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.802000  452237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:54.811346  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.748552  451943 retry.go:31] will retry after 3.201777002s: kubelet not initialised
	I0109 00:10:52.273938  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:54.771590  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.775438  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.422099  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:55.894657  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:57.310528  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:59.313642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.956459  451943 retry.go:31] will retry after 6.469663917s: kubelet not initialised
	I0109 00:10:59.272417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.272940  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:58.393999  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:00.893766  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.809942  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.309972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:03.432087  451943 retry.go:31] will retry after 13.730562228s: kubelet not initialised
	I0109 00:11:03.771273  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.268462  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:02.894171  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.894858  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:07.393254  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.310613  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.812051  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.270554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:10.272757  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:09.893982  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.894729  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.310615  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:13.311452  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:12.770003  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.770452  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.393106  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:16.394348  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:15.809972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.309870  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:17.168682  451943 retry.go:31] will retry after 14.832819941s: kubelet not initialised
	I0109 00:11:17.271266  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:19.271908  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.771727  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.394025  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:20.808968  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:22.810167  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.773732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:26.269527  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.394213  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.893851  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.310683  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:27.810354  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:29.814175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.271026  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.271149  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.393310  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.393582  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.310474  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.312045  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.007072  451943 kubeadm.go:787] kubelet initialised
	I0109 00:11:32.007097  451943 kubeadm.go:788] duration metric: took 46.889534921s waiting for restarted kubelet to initialise ...
	I0109 00:11:32.007109  451943 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:11:32.012969  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018937  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.018957  451943 pod_ready.go:81] duration metric: took 5.963591ms waiting for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018975  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028039  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.028067  451943 pod_ready.go:81] duration metric: took 9.084525ms waiting for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028078  451943 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032808  451943 pod_ready.go:92] pod "etcd-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.032832  451943 pod_ready.go:81] duration metric: took 4.746043ms waiting for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032843  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037435  451943 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.037466  451943 pod_ready.go:81] duration metric: took 4.610014ms waiting for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037478  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405716  451943 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.405742  451943 pod_ready.go:81] duration metric: took 368.257236ms waiting for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405760  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806721  451943 pod_ready.go:92] pod "kube-proxy-bc4tl" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.806747  451943 pod_ready.go:81] duration metric: took 400.981273ms waiting for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806756  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205810  451943 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:33.205840  451943 pod_ready.go:81] duration metric: took 399.074693ms waiting for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205855  451943 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:35.213679  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.271553  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.773998  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.893079  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:35.393616  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.393839  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:36.809214  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:38.809702  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.714222  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.213748  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.270073  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.270564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.771950  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.894200  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.895632  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.810676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:43.310394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:42.214955  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.713236  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.270745  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.769008  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.395323  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.893378  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:45.811067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.310292  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.713278  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:49.212583  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.769858  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.270380  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.894013  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.896386  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.311125  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:52.809499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:54.811339  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.213641  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.214157  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.711725  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.271867  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.771478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.393541  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.894575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.310953  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:59.809359  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.713429  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.215472  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.270445  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.770718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.393555  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:01.810389  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:04.311994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:02.713532  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.213545  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.270633  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.771349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.392243  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.393601  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:06.809758  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.310090  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.713345  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.713636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.774207  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:10.271536  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.892992  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.894465  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.394064  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.310240  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.311902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.713857  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.714968  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.770737  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.271471  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:14.893031  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.393146  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.312766  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.808902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:16.213122  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:18.215771  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.713269  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.772762  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.274611  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:19.399686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:21.895279  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.315434  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.809703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.813460  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:23.215054  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.216598  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.771192  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.271732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.392768  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:26.393642  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.309913  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.310558  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.713280  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.713388  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.771683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.269862  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:28.892939  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.894280  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:31.310860  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.313161  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.215375  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.713965  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.271111  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.770162  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.393271  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.393849  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.811747  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:38.311158  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.212773  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.712777  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.273180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.274403  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.770772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.893508  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.893834  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.394002  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:40.311402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.809836  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.714285  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.213161  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:43.772982  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.269879  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.893044  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.894333  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:45.310764  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:47.810622  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.213392  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.214029  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.712956  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.273388  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.772779  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:49.393068  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:51.894350  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.314344  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:52.809208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.809757  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.213473  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.213609  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.270014  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.270513  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.392981  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:56.896752  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.310923  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.809897  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.713409  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:00.213074  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.771956  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.772597  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.776736  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.392477  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.393047  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.810055  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.316038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:02.214227  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.714073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.271552  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.274081  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:03.394211  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:05.892722  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.808153  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.809658  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.213252  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:09.214016  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.771514  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.271265  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.893535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.394062  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.811210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.309480  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.713294  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.714070  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.274656  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.770363  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:12.892232  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:14.892967  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.893970  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.309955  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.310537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.312112  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.213649  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:18.712398  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:20.713447  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.770504  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.776344  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.391934  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.393412  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.809067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.811245  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.715248  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.215489  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.270417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:24.276304  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.771255  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.892801  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.395553  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.815479  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.309581  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:27.713470  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:29.713667  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.772564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.270216  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.892655  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.893557  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.310454  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.311950  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.809831  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.714418  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.213103  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:33.270895  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.772159  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.894686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.393366  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.810699  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.315029  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.217502  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:38.713073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.772491  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:40.269651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.894503  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.895994  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.393607  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.808659  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.809657  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.212704  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.713415  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.270157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.769816  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.770516  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.394641  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.895010  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.310425  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.310812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.213445  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.714493  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:49.270269  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.262625  451984 pod_ready.go:81] duration metric: took 4m0.000332739s waiting for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
	E0109 00:13:50.262665  451984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:13:50.262695  451984 pod_ready.go:38] duration metric: took 4m14.064299354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:13:50.262735  451984 kubeadm.go:640] restartCluster took 4m35.223413047s
	W0109 00:13:50.262837  451984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:13:50.262989  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:13:49.394039  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.893287  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.809875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.311275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.214302  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.215860  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.714407  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.893351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.895250  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.811061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:57.811763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.213089  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.214795  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.393252  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.394330  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.395864  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:03.952243  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.689217944s)
	I0109 00:14:03.952404  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:03.965852  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:14:03.975784  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:14:03.984599  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:14:03.984649  451984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:14:04.041116  451984 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:14:04.041179  451984 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:14:04.213643  451984 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:14:04.213797  451984 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:14:04.213932  451984 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:14:04.470597  451984 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:14:00.312213  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.813799  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.816592  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.472836  451984 out.go:204]   - Generating certificates and keys ...
	I0109 00:14:04.473031  451984 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:14:04.473115  451984 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:14:04.473210  451984 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:14:04.473272  451984 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:14:04.473376  451984 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:14:04.473804  451984 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:14:04.474373  451984 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:14:04.474832  451984 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:14:04.475386  451984 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:14:04.475875  451984 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:14:04.476290  451984 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:14:04.476378  451984 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:14:04.599856  451984 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:14:04.905946  451984 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:14:05.274703  451984 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:14:05.463087  451984 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:14:05.464020  451984 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:14:05.468993  451984 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:14:02.215257  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.714764  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:05.471038  451984 out.go:204]   - Booting up control plane ...
	I0109 00:14:05.471146  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:14:05.471245  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:14:05.471342  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:14:05.488208  451984 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:14:05.489177  451984 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:14:05.489282  451984 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:14:05.629700  451984 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:14:04.895593  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.396575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.310589  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.809734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.212902  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.214384  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.895351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:12.397437  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.633863  451984 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004133 seconds
	I0109 00:14:13.634067  451984 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:14:13.657224  451984 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:14:14.196593  451984 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:14:14.196798  451984 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-845373 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:14:14.715124  451984 kubeadm.go:322] [bootstrap-token] Using token: 0z1u86.ex8qfq3o12xtqu87
	I0109 00:14:14.716600  451984 out.go:204]   - Configuring RBAC rules ...
	I0109 00:14:14.716727  451984 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:14:14.724791  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:14:14.734361  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:14:14.742345  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:14:14.749616  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:14:14.753942  451984 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:14:14.774188  451984 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:14:15.042710  451984 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:14:15.131751  451984 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:14:15.132745  451984 kubeadm.go:322] 
	I0109 00:14:15.132804  451984 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:14:15.132810  451984 kubeadm.go:322] 
	I0109 00:14:15.132872  451984 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:14:15.132879  451984 kubeadm.go:322] 
	I0109 00:14:15.132898  451984 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:14:15.132959  451984 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:14:15.133067  451984 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:14:15.133094  451984 kubeadm.go:322] 
	I0109 00:14:15.133160  451984 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:14:15.133173  451984 kubeadm.go:322] 
	I0109 00:14:15.133229  451984 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:14:15.133241  451984 kubeadm.go:322] 
	I0109 00:14:15.133313  451984 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:14:15.133412  451984 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:14:15.133510  451984 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:14:15.133524  451984 kubeadm.go:322] 
	I0109 00:14:15.133644  451984 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:14:15.133761  451984 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:14:15.133777  451984 kubeadm.go:322] 
	I0109 00:14:15.133882  451984 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134003  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:14:15.134030  451984 kubeadm.go:322] 	--control-plane 
	I0109 00:14:15.134037  451984 kubeadm.go:322] 
	I0109 00:14:15.134137  451984 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:14:15.134145  451984 kubeadm.go:322] 
	I0109 00:14:15.134240  451984 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134415  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:14:15.135483  451984 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:14:15.135524  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:14:15.135536  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:14:15.137331  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:14:11.810358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.813252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:11.214971  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.713322  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.714895  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.138794  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:14:15.164722  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:14:15.236472  451984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:14:15.236536  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.236558  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=embed-certs-845373 minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.353564  451984 ops.go:34] apiserver oom_adj: -16
	I0109 00:14:15.675801  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.176590  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.676619  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.176120  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.893438  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.896780  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.311939  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.312023  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.213002  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.214958  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:17.676614  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.176469  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.676367  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.176646  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.676613  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.176615  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.676641  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.176075  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.676489  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:22.176784  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.395936  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:21.892353  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.810687  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.810879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.713569  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:25.213852  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.676054  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.176662  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.676911  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.175927  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.676685  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.176625  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.676281  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.176650  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.675943  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.176834  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.894745  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:26.394535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.676594  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.846642  451984 kubeadm.go:1088] duration metric: took 12.610179243s to wait for elevateKubeSystemPrivileges.
	I0109 00:14:27.846694  451984 kubeadm.go:406] StartCluster complete in 5m12.860674926s
	I0109 00:14:27.846775  451984 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.846922  451984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:14:27.849568  451984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.849886  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:14:27.850039  451984 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:14:27.850143  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:14:27.850168  451984 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845373"
	I0109 00:14:27.850185  451984 addons.go:69] Setting metrics-server=true in profile "embed-certs-845373"
	I0109 00:14:27.850196  451984 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-845373"
	W0109 00:14:27.850206  451984 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:14:27.850209  451984 addons.go:237] Setting addon metrics-server=true in "embed-certs-845373"
	W0109 00:14:27.850226  451984 addons.go:246] addon metrics-server should already be in state true
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850780  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850804  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850886  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850916  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850174  451984 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845373"
	I0109 00:14:27.850983  451984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845373"
	I0109 00:14:27.851436  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.851473  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.869118  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0109 00:14:27.869634  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.870272  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.870301  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.870793  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.870883  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0109 00:14:27.871047  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0109 00:14:27.871320  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871380  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871694  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.871740  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.871880  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871910  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.871917  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871934  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.872311  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872318  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872472  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.872864  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.872907  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.875833  451984 addons.go:237] Setting addon default-storageclass=true in "embed-certs-845373"
	W0109 00:14:27.875851  451984 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:14:27.875874  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.876143  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.876172  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0109 00:14:27.892642  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0109 00:14:27.893165  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893218  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893382  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893725  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893751  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.893889  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893906  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894287  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894344  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894351  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.894366  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894531  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.894905  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894920  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.894955  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.895325  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.897315  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.897565  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.899343  451984 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:14:27.901058  451984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:14:27.903097  451984 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:27.903113  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:14:27.903129  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.901085  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:14:27.903182  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:14:27.903190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.907703  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908100  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908474  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908505  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908744  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908765  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908869  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.908924  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.909079  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909118  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909274  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909303  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909444  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.909660  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.913404  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0109 00:14:27.913992  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.914388  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.914409  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.914831  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.915055  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.916650  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.916872  451984 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:27.916891  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:14:27.916911  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.919557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.919945  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.919962  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.920188  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.920346  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.920520  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.920627  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:28.169436  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:14:28.180527  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:28.194004  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:14:28.194025  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:14:28.216619  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:28.258292  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:14:28.258321  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:14:28.320624  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:28.320652  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:14:28.355471  451984 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-845373" context rescaled to 1 replicas
	I0109 00:14:28.355514  451984 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:14:28.357573  451984 out.go:177] * Verifying Kubernetes components...
	I0109 00:14:25.309676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.312462  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:29.810262  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:28.359075  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:28.379542  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:30.061115  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.891626144s)
	I0109 00:14:30.061149  451984 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0109 00:14:30.452861  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.236197297s)
	I0109 00:14:30.452929  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.452943  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.452943  451984 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.09383281s)
	I0109 00:14:30.453122  451984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.453131  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.272573904s)
	I0109 00:14:30.453293  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453306  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453320  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453311  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453332  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453342  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453674  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453693  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453700  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453708  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453740  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453752  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453764  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453784  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.454074  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.454093  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.454107  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.457209  451984 node_ready.go:49] node "embed-certs-845373" has status "Ready":"True"
	I0109 00:14:30.457229  451984 node_ready.go:38] duration metric: took 4.077361ms waiting for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.457238  451984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:30.488244  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.488275  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.488609  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.488634  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.488660  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.489887  451984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:30.508615  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.129028413s)
	I0109 00:14:30.508663  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.508677  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.508966  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.509058  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509152  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509175  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.509190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.509535  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509564  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509578  451984 addons.go:473] Verifying addon metrics-server=true in "embed-certs-845373"
	I0109 00:14:30.509582  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.511636  451984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:14:27.714663  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.213049  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.513246  451984 addons.go:508] enable addons completed in 2.663216413s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:14:31.999091  451984 pod_ready.go:92] pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:31.999122  451984 pod_ready.go:81] duration metric: took 1.509214799s waiting for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:31.999131  451984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005047  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.005077  451984 pod_ready.go:81] duration metric: took 5.937291ms waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005091  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011823  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.011853  451984 pod_ready.go:81] duration metric: took 6.752071ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011866  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017760  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.017782  451984 pod_ready.go:81] duration metric: took 5.908986ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017792  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058063  451984 pod_ready.go:92] pod "kube-proxy-nxtn2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.058094  451984 pod_ready.go:81] duration metric: took 40.295825ms waiting for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058104  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:28.397781  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.894153  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:31.394151  452488 pod_ready.go:81] duration metric: took 4m0.008881128s waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:31.394180  452488 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:14:31.394191  452488 pod_ready.go:38] duration metric: took 4m0.808517944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:31.394210  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:14:31.394307  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:31.394397  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:31.457897  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:31.457929  452488 cri.go:89] found id: ""
	I0109 00:14:31.457941  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:31.458002  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.463534  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:31.463632  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:31.524249  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:31.524284  452488 cri.go:89] found id: ""
	I0109 00:14:31.524296  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:31.524363  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.529188  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:31.529260  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:31.583505  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:31.583543  452488 cri.go:89] found id: ""
	I0109 00:14:31.583554  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:31.583618  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.589373  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:31.589466  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:31.639895  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:31.639931  452488 cri.go:89] found id: ""
	I0109 00:14:31.639942  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:31.640016  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.644881  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:31.644952  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:31.686002  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.686031  452488 cri.go:89] found id: ""
	I0109 00:14:31.686047  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:31.686114  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.691664  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:31.691754  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:31.745729  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:31.745757  452488 cri.go:89] found id: ""
	I0109 00:14:31.745766  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:31.745829  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.751116  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:31.751192  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:31.794856  452488 cri.go:89] found id: ""
	I0109 00:14:31.794890  452488 logs.go:284] 0 containers: []
	W0109 00:14:31.794901  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:31.794909  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:31.794976  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:31.840973  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:31.840999  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:31.841006  452488 cri.go:89] found id: ""
	I0109 00:14:31.841014  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:31.841084  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.845852  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.850824  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:31.850851  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:31.914344  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:31.914404  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.958899  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:31.958934  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:32.021319  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:32.021353  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:32.074995  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:32.075034  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:32.089535  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:32.089572  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:32.244418  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:32.244460  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:32.288116  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:32.288161  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:32.332939  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:32.332980  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:32.378455  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:32.378487  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:32.437376  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:32.437421  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:31.813208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.311338  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.215522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.712223  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.460309  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.460343  451984 pod_ready.go:81] duration metric: took 402.230769ms waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.460358  451984 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:34.470103  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.470854  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.911300  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:32.911345  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:32.959902  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:32.959942  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.500402  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:14:35.516569  452488 api_server.go:72] duration metric: took 4m10.712558057s to wait for apiserver process to appear ...
	I0109 00:14:35.516600  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:14:35.516640  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:35.516690  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:35.559395  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:35.559421  452488 cri.go:89] found id: ""
	I0109 00:14:35.559429  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:35.559497  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.564381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:35.564468  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:35.604963  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:35.604991  452488 cri.go:89] found id: ""
	I0109 00:14:35.605004  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:35.605074  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.610352  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:35.610412  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:35.655316  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.655353  452488 cri.go:89] found id: ""
	I0109 00:14:35.655381  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:35.655471  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.660932  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:35.661015  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:35.702201  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:35.702228  452488 cri.go:89] found id: ""
	I0109 00:14:35.702237  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:35.702297  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.707544  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:35.707615  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:35.755445  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:35.755478  452488 cri.go:89] found id: ""
	I0109 00:14:35.755489  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:35.755555  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.760393  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:35.760470  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:35.813641  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:35.813672  452488 cri.go:89] found id: ""
	I0109 00:14:35.813682  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:35.813749  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.819342  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:35.819495  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:35.861693  452488 cri.go:89] found id: ""
	I0109 00:14:35.861723  452488 logs.go:284] 0 containers: []
	W0109 00:14:35.861732  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:35.861740  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:35.861807  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:35.900886  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:35.900931  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:35.900937  452488 cri.go:89] found id: ""
	I0109 00:14:35.900945  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:35.901005  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.905463  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.910271  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:35.910300  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:36.056761  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:36.056798  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:36.096707  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:36.096739  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:36.555891  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:36.555936  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:36.573167  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:36.573196  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:36.622139  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:36.622169  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:36.680395  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:36.680435  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:36.740350  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:36.740389  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:36.779409  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:36.779443  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:36.837425  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:36.837474  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:36.892724  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:36.892763  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:36.939944  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:36.939979  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:36.999567  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:36.999612  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:36.810729  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.810924  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.713630  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.213516  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.970746  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.468803  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.546015  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:14:39.551932  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:14:39.553444  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:14:39.553469  452488 api_server.go:131] duration metric: took 4.036861283s to wait for apiserver health ...
	I0109 00:14:39.553480  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:14:39.553512  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:39.553592  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:39.597338  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:39.597368  452488 cri.go:89] found id: ""
	I0109 00:14:39.597381  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:39.597450  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.602381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:39.602473  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:39.643738  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:39.643776  452488 cri.go:89] found id: ""
	I0109 00:14:39.643787  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:39.643854  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.649021  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:39.649096  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:39.692903  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:39.692926  452488 cri.go:89] found id: ""
	I0109 00:14:39.692934  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:39.692992  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.697806  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:39.697882  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:39.746679  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:39.746706  452488 cri.go:89] found id: ""
	I0109 00:14:39.746716  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:39.746765  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.752396  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:39.752459  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:39.800438  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:39.800461  452488 cri.go:89] found id: ""
	I0109 00:14:39.800470  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:39.800535  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.805644  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:39.805737  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:39.847341  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:39.847387  452488 cri.go:89] found id: ""
	I0109 00:14:39.847398  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:39.847465  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.851972  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:39.852053  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:39.899183  452488 cri.go:89] found id: ""
	I0109 00:14:39.899219  452488 logs.go:284] 0 containers: []
	W0109 00:14:39.899231  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:39.899239  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:39.899309  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:39.958353  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:39.958395  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:39.958400  452488 cri.go:89] found id: ""
	I0109 00:14:39.958409  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:39.958469  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.963264  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.968827  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:39.968858  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:40.015655  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:40.015685  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:40.161910  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:40.161944  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:40.200197  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:40.200233  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:40.244075  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:40.244119  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:40.655095  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:40.655160  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:40.711957  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:40.712004  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:40.765456  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:40.765503  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:40.824273  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:40.824320  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:40.887213  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:40.887252  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:40.925809  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:40.925842  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:40.967599  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:40.967635  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:41.021163  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:41.021219  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:43.543901  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:14:43.543933  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.543938  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.543943  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.543947  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.543951  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.543955  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.543962  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.543966  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.543974  452488 system_pods.go:74] duration metric: took 3.990487712s to wait for pod list to return data ...
	I0109 00:14:43.543982  452488 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:14:43.547032  452488 default_sa.go:45] found service account: "default"
	I0109 00:14:43.547063  452488 default_sa.go:55] duration metric: took 3.07377ms for default service account to be created ...
	I0109 00:14:43.547075  452488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:14:43.554265  452488 system_pods.go:86] 8 kube-system pods found
	I0109 00:14:43.554305  452488 system_pods.go:89] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.554314  452488 system_pods.go:89] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.554322  452488 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.554329  452488 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.554336  452488 system_pods.go:89] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.554343  452488 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.554356  452488 system_pods.go:89] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.554397  452488 system_pods.go:89] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.554420  452488 system_pods.go:126] duration metric: took 7.336546ms to wait for k8s-apps to be running ...
	I0109 00:14:43.554431  452488 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:14:43.554494  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:43.570839  452488 system_svc.go:56] duration metric: took 16.394034ms WaitForService to wait for kubelet.
	I0109 00:14:43.570874  452488 kubeadm.go:581] duration metric: took 4m18.766870325s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:14:43.570904  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:14:43.575087  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:14:43.575115  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:14:43.575127  452488 node_conditions.go:105] duration metric: took 4.218446ms to run NodePressure ...
	I0109 00:14:43.575139  452488 start.go:228] waiting for startup goroutines ...
	I0109 00:14:43.575145  452488 start.go:233] waiting for cluster config update ...
	I0109 00:14:43.575154  452488 start.go:242] writing updated cluster config ...
	I0109 00:14:43.575452  452488 ssh_runner.go:195] Run: rm -f paused
	I0109 00:14:43.636407  452488 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:14:43.638597  452488 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-834116" cluster and "default" namespace by default
	I0109 00:14:40.814426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.310989  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.214186  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.714118  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.968087  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.968943  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.809788  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:47.810189  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:46.213897  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.714327  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.716636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.472384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.473405  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.310188  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.311048  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.803108  452237 pod_ready.go:81] duration metric: took 4m0.001087466s waiting for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:52.803148  452237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:14:52.803179  452237 pod_ready.go:38] duration metric: took 4m43.413410939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:52.803217  452237 kubeadm.go:640] restartCluster took 5m4.419560589s
	W0109 00:14:52.803342  452237 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:14:52.803433  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:14:53.213308  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.215229  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.972718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.470546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.714170  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:00.213742  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.968558  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:59.969971  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:01.970573  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:02.713539  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:05.213339  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:04.470909  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:06.976278  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:07.153986  452237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.350512063s)
	I0109 00:15:07.154091  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:07.169206  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:07.180120  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:07.190689  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:07.190746  452237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:15:07.249723  452237 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:15:07.249803  452237 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:07.413454  452237 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:07.413648  452237 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:07.413809  452237 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:07.666677  452237 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:07.668620  452237 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:07.668736  452237 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:07.668869  452237 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:07.669044  452237 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:07.669122  452237 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:07.669206  452237 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:07.669265  452237 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:07.669338  452237 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:07.669409  452237 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:07.669493  452237 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:07.669587  452237 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:07.669632  452237 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:07.669698  452237 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:07.892774  452237 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:08.387341  452237 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:15:08.697850  452237 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:09.110380  452237 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:09.182970  452237 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:09.183625  452237 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:09.186350  452237 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:09.188402  452237 out.go:204]   - Booting up control plane ...
	I0109 00:15:09.188494  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:09.188620  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:09.190877  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:09.210069  452237 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:09.213806  452237 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:09.214168  452237 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:15:09.348180  452237 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:07.713522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:10.212932  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:09.468413  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:11.472366  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:12.214158  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:14.713831  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:13.968332  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:15.970174  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:17.853084  452237 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502974 seconds
	I0109 00:15:17.871025  452237 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:17.897430  452237 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:18.444483  452237 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:18.444785  452237 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-378213 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:15:18.959611  452237 kubeadm.go:322] [bootstrap-token] Using token: dhjf8u.939ptni0q22ypfw8
	I0109 00:15:18.961445  452237 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:18.961621  452237 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:18.976769  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:15:18.986315  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:18.991512  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:18.996317  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:19.001219  452237 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:19.018739  452237 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:15:19.300703  452237 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:19.384320  452237 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:19.385524  452237 kubeadm.go:322] 
	I0109 00:15:19.385609  452237 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:19.385646  452237 kubeadm.go:322] 
	I0109 00:15:19.385746  452237 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:19.385759  452237 kubeadm.go:322] 
	I0109 00:15:19.385780  452237 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:19.385851  452237 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:19.385894  452237 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:19.385902  452237 kubeadm.go:322] 
	I0109 00:15:19.385976  452237 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:15:19.385984  452237 kubeadm.go:322] 
	I0109 00:15:19.386052  452237 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:15:19.386063  452237 kubeadm.go:322] 
	I0109 00:15:19.386140  452237 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:19.386255  452237 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:19.386338  452237 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:19.386348  452237 kubeadm.go:322] 
	I0109 00:15:19.386445  452237 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:15:19.386563  452237 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:19.386588  452237 kubeadm.go:322] 
	I0109 00:15:19.386704  452237 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.386865  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:19.386893  452237 kubeadm.go:322] 	--control-plane 
	I0109 00:15:19.386900  452237 kubeadm.go:322] 
	I0109 00:15:19.387013  452237 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:19.387023  452237 kubeadm.go:322] 
	I0109 00:15:19.387156  452237 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.387306  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:19.388274  452237 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:19.388386  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:15:19.388404  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:19.390641  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:19.392729  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:19.420375  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:19.480953  452237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:19.481036  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.481070  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=no-preload-378213 minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.529444  452237 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:19.828947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:17.214395  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:19.714562  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:18.467657  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.469306  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.329278  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:20.829730  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.329756  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.829370  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.329549  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.829161  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.329937  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.829891  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.329077  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.829276  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.715433  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.214554  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:22.469602  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.968838  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:25.329025  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:25.829279  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.329947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.829794  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.329030  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.829080  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.329613  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.829372  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.329826  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.829063  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.712393  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:28.715010  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:30.329991  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:30.829320  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.329115  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.423331  452237 kubeadm.go:1088] duration metric: took 11.942366757s to wait for elevateKubeSystemPrivileges.
	I0109 00:15:31.423377  452237 kubeadm.go:406] StartCluster complete in 5m43.086225729s
	I0109 00:15:31.423405  452237 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.423510  452237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:15:31.425917  452237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.426178  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:15:31.426284  452237 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:15:31.426369  452237 addons.go:69] Setting storage-provisioner=true in profile "no-preload-378213"
	I0109 00:15:31.426384  452237 addons.go:69] Setting default-storageclass=true in profile "no-preload-378213"
	I0109 00:15:31.426397  452237 addons.go:237] Setting addon storage-provisioner=true in "no-preload-378213"
	W0109 00:15:31.426409  452237 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:15:31.426432  452237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-378213"
	I0109 00:15:31.426447  452237 addons.go:69] Setting metrics-server=true in profile "no-preload-378213"
	I0109 00:15:31.426476  452237 addons.go:237] Setting addon metrics-server=true in "no-preload-378213"
	W0109 00:15:31.426484  452237 addons.go:246] addon metrics-server should already be in state true
	I0109 00:15:31.426485  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426540  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426434  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:15:31.426891  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426918  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426927  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426931  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.446291  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0109 00:15:31.446423  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0109 00:15:31.446819  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0109 00:15:31.447018  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.447639  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.447724  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447854  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.448095  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448259  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448288  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448354  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.448439  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448465  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448921  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448997  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.449699  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449744  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.449757  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449785  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.452784  452237 addons.go:237] Setting addon default-storageclass=true in "no-preload-378213"
	W0109 00:15:31.452809  452237 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:15:31.452841  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.454376  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.454416  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.467638  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0109 00:15:31.468325  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.468901  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.468921  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.469339  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.469563  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.471409  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.473329  452237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:15:31.474680  452237 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.474693  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:15:31.474706  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.473604  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0109 00:15:31.474062  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0109 00:15:31.475095  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475399  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.475627  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.475979  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.476163  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.477959  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.479656  452237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:15:31.478629  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.479280  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.479557  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.480974  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.481058  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:15:31.481066  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:15:31.481079  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.481110  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.481128  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.481308  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.481878  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.482384  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.483085  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.483645  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.483668  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.484708  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485095  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.485117  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485318  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.487608  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.487807  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.487999  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.499347  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0109 00:15:31.499913  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.500547  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.500570  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.500917  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.501145  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.503016  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.503296  452237 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.503310  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:15:31.503325  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.506091  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506397  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.506455  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506652  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.506831  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.506978  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.507091  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.624782  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:15:31.642826  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.663296  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.710300  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:15:31.710330  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:15:31.787478  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:15:31.787517  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:15:31.871349  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:31.871407  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:15:31.968192  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:32.072474  452237 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-378213" context rescaled to 1 replicas
	I0109 00:15:32.072532  452237 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:15:32.074625  452237 out.go:177] * Verifying Kubernetes components...
	I0109 00:15:27.468923  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:29.971742  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:32.075944  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:32.439632  452237 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0109 00:15:32.439722  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.439751  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440089  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440193  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.440209  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.440219  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440166  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440559  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440571  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440580  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.497313  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.497346  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.497717  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.497747  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901192  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.237846158s)
	I0109 00:15:32.901262  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901276  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901654  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.901703  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901719  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901730  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901662  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902029  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902069  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.902079  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030220  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.061947007s)
	I0109 00:15:33.030237  452237 node_ready.go:35] waiting up to 6m0s for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.030290  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030308  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.030694  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.030714  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030725  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030734  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.031003  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.031022  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.031034  452237 addons.go:473] Verifying addon metrics-server=true in "no-preload-378213"
	I0109 00:15:33.032849  452237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:15:33.034106  452237 addons.go:508] enable addons completed in 1.60782305s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:15:33.044548  452237 node_ready.go:49] node "no-preload-378213" has status "Ready":"True"
	I0109 00:15:33.044577  452237 node_ready.go:38] duration metric: took 14.31045ms waiting for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.044592  452237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.060577  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:34.066536  452237 pod_ready.go:97] error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066570  452237 pod_ready.go:81] duration metric: took 1.005962139s waiting for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:34.066584  452237 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066594  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:31.213050  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:33.206836  451943 pod_ready.go:81] duration metric: took 4m0.000952779s waiting for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:33.206864  451943 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:15:33.206884  451943 pod_ready.go:38] duration metric: took 4m1.199765303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.206916  451943 kubeadm.go:640] restartCluster took 5m9.054273444s
	W0109 00:15:33.206995  451943 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:15:33.207029  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:15:32.469904  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:34.969702  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:36.074768  452237 pod_ready.go:92] pod "coredns-76f75df574-ztvgr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.074793  452237 pod_ready.go:81] duration metric: took 2.008191718s waiting for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.074803  452237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080586  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.080610  452237 pod_ready.go:81] duration metric: took 5.80009ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080623  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.085972  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.085995  452237 pod_ready.go:81] duration metric: took 5.365045ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.086004  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091275  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.091295  452237 pod_ready.go:81] duration metric: took 5.284302ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091306  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095919  452237 pod_ready.go:92] pod "kube-proxy-4vnf5" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.095938  452237 pod_ready.go:81] duration metric: took 4.624685ms waiting for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095949  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471021  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.471051  452237 pod_ready.go:81] duration metric: took 375.093915ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471066  452237 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:38.478891  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.932714  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.725641704s)
	I0109 00:15:39.932824  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:39.949655  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:39.967317  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:39.983553  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:39.983602  451943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0109 00:15:40.196509  451943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:37.468440  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.468561  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:41.468728  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:40.481038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:42.979928  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:43.468928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.968791  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.479525  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.981785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:49.988192  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.970158  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:50.469209  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.798385  451943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0109 00:15:53.798458  451943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:53.798557  451943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:53.798719  451943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:53.798863  451943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:53.799001  451943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:53.799122  451943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:53.799199  451943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0109 00:15:53.799296  451943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:53.800918  451943 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:53.801030  451943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:53.801108  451943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:53.801199  451943 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:53.801284  451943 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:53.801342  451943 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:53.801386  451943 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:53.801441  451943 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:53.801491  451943 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:53.801563  451943 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:53.801654  451943 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:53.801710  451943 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:53.801776  451943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:53.801841  451943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:53.801885  451943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:53.801935  451943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:53.802013  451943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:53.802097  451943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:53.803572  451943 out.go:204]   - Booting up control plane ...
	I0109 00:15:53.803682  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:53.803757  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:53.803811  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:53.803932  451943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:53.804150  451943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:53.804251  451943 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506007 seconds
	I0109 00:15:53.804388  451943 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:53.804541  451943 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:53.804628  451943 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:53.804832  451943 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-003293 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0109 00:15:53.804900  451943 kubeadm.go:322] [bootstrap-token] Using token: 4iop3a.ft6ghwlgcg45v9u4
	I0109 00:15:53.806501  451943 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:53.806592  451943 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:53.806724  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:53.806832  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:53.806959  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:53.807033  451943 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:53.807071  451943 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:53.807109  451943 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:53.807115  451943 kubeadm.go:322] 
	I0109 00:15:53.807175  451943 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:53.807199  451943 kubeadm.go:322] 
	I0109 00:15:53.807319  451943 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:53.807328  451943 kubeadm.go:322] 
	I0109 00:15:53.807353  451943 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:53.807457  451943 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:53.807531  451943 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:53.807541  451943 kubeadm.go:322] 
	I0109 00:15:53.807594  451943 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:53.807668  451943 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:53.807746  451943 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:53.807766  451943 kubeadm.go:322] 
	I0109 00:15:53.807884  451943 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0109 00:15:53.807989  451943 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:53.807998  451943 kubeadm.go:322] 
	I0109 00:15:53.808083  451943 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808215  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:53.808267  451943 kubeadm.go:322]     --control-plane 	  
	I0109 00:15:53.808282  451943 kubeadm.go:322] 
	I0109 00:15:53.808416  451943 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:53.808431  451943 kubeadm.go:322] 
	I0109 00:15:53.808535  451943 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808635  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:53.808646  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:15:53.808655  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:53.810445  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:52.478401  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.478468  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.812384  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:53.822034  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:53.841918  451943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:53.842007  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.842023  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=old-k8s-version-003293 minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.878580  451943 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:54.119184  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:54.619596  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.119468  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.619508  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:52.969233  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.969384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.969570  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.978217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:59.478428  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.119299  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:56.620179  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.119526  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.619985  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.119330  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.619572  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.120142  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.619498  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.119329  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.620206  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.468767  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.969313  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.978314  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:03.979583  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.120279  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:01.619668  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.620169  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.120249  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.619563  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.619912  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.120243  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.620114  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.971649  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.468683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:05.980829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:08.479315  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.119938  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:06.619543  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.119220  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.619392  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.119991  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.619517  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.120205  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.620121  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.119909  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.273872  451943 kubeadm.go:1088] duration metric: took 16.431936842s to wait for elevateKubeSystemPrivileges.
	I0109 00:16:10.273910  451943 kubeadm.go:406] StartCluster complete in 5m46.185018744s
	I0109 00:16:10.273961  451943 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.274054  451943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:16:10.275851  451943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.276124  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:16:10.276261  451943 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:16:10.276362  451943 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276373  451943 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276388  451943 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-003293"
	I0109 00:16:10.276394  451943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-003293"
	I0109 00:16:10.276390  451943 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276415  451943 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-003293"
	W0109 00:16:10.276428  451943 addons.go:246] addon metrics-server should already be in state true
	I0109 00:16:10.276454  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:16:10.276481  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	W0109 00:16:10.276397  451943 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:16:10.276544  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.276864  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276880  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276867  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276941  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.276955  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.277062  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.294099  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0109 00:16:10.294268  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0109 00:16:10.294410  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0109 00:16:10.294718  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294768  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294925  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.295279  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295305  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295388  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295419  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295397  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295480  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295693  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295769  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295788  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.296012  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.296310  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.296357  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.297119  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.297171  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.299887  451943 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-003293"
	W0109 00:16:10.299910  451943 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:16:10.299946  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.300224  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.300263  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.313007  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0109 00:16:10.313533  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.314010  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.314026  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.314437  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.314622  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.315598  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0109 00:16:10.316247  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.316532  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.318734  451943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:16:10.317343  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.317379  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0109 00:16:10.320285  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:16:10.320308  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:16:10.320329  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.320333  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.320705  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.320963  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.321103  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.321233  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.321247  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.321761  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.322210  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.322242  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.323835  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.324029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.324152  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.324177  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.326057  451943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:16:10.324406  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.328066  451943 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.328087  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:16:10.328096  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.328124  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.328784  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.329014  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.331395  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.331785  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.331810  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.332001  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.332191  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.332335  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.332480  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.347123  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0109 00:16:10.347716  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.348691  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.348719  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.349127  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.349342  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.350834  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.351133  451943 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.351149  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:16:10.351168  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.354189  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354621  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.354668  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354909  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.355119  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.355294  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.355481  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.515777  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.534034  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:16:10.534064  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:16:10.554850  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.584934  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:16:10.584964  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:16:10.615671  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:16:10.637303  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.637339  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:16:10.680679  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.830403  451943 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-003293" context rescaled to 1 replicas
	I0109 00:16:10.830449  451943 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:16:10.832633  451943 out.go:177] * Verifying Kubernetes components...
	I0109 00:16:10.834172  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:16:11.515705  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.515738  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516087  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.516123  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516132  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.516141  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.516151  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516389  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516407  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.571488  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.571524  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.571880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.571890  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.571911  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630216  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075317719s)
	I0109 00:16:11.630282  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630297  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.630308  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.014587881s)
	I0109 00:16:11.630345  451943 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0109 00:16:11.630710  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.630729  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630740  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.630751  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.631004  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.631032  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.631153  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.716276  451943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.716463  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0357366s)
	I0109 00:16:11.716513  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716534  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.716848  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.716869  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.716878  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716889  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.717212  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.717222  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.717228  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.717245  451943 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-003293"
	I0109 00:16:11.719193  451943 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:16:08.968622  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.470234  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:10.479812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:12.984384  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.720570  451943 addons.go:508] enable addons completed in 1.44432074s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:16:11.733736  451943 node_ready.go:49] node "old-k8s-version-003293" has status "Ready":"True"
	I0109 00:16:11.733767  451943 node_ready.go:38] duration metric: took 17.451191ms waiting for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.733787  451943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:11.750301  451943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:13.762510  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:13.969774  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.468912  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:15.481249  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:17.978744  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:19.979938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.257523  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.259142  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.757454  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.469229  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.469761  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:22.478368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.978345  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:21.256765  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.256797  451943 pod_ready.go:81] duration metric: took 9.506455286s waiting for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.256807  451943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262633  451943 pod_ready.go:92] pod "kube-proxy-h8br2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.262651  451943 pod_ready.go:81] duration metric: took 5.836717ms waiting for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262660  451943 pod_ready.go:38] duration metric: took 9.52886361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:21.262697  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:16:21.262758  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:16:21.280249  451943 api_server.go:72] duration metric: took 10.449767566s to wait for apiserver process to appear ...
	I0109 00:16:21.280282  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:16:21.280305  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:16:21.286759  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:16:21.287885  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:16:21.287913  451943 api_server.go:131] duration metric: took 7.622726ms to wait for apiserver health ...
	I0109 00:16:21.287924  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:16:21.292745  451943 system_pods.go:59] 4 kube-system pods found
	I0109 00:16:21.292774  451943 system_pods.go:61] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.292782  451943 system_pods.go:61] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.292792  451943 system_pods.go:61] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.292799  451943 system_pods.go:61] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.292809  451943 system_pods.go:74] duration metric: took 4.87707ms to wait for pod list to return data ...
	I0109 00:16:21.292817  451943 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:16:21.295463  451943 default_sa.go:45] found service account: "default"
	I0109 00:16:21.295486  451943 default_sa.go:55] duration metric: took 2.661749ms for default service account to be created ...
	I0109 00:16:21.295495  451943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:16:21.299334  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.299369  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.299379  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.299389  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.299401  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.299419  451943 retry.go:31] will retry after 262.555966ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.567416  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.567444  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.567449  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.567456  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.567461  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.567483  451943 retry.go:31] will retry after 296.862413ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.869873  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.869910  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.869919  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.869932  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.869939  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.869960  451943 retry.go:31] will retry after 354.537219ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.229945  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.229973  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.229978  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.229985  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.229990  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.230008  451943 retry.go:31] will retry after 403.317754ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.639068  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.639100  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.639106  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.639115  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.639122  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.639145  451943 retry.go:31] will retry after 548.96975ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:23.193832  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:23.193865  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:23.193874  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:23.193884  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:23.193891  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:23.193912  451943 retry.go:31] will retry after 808.39734ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:24.007761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:24.007789  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:24.007794  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:24.007800  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:24.007805  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:24.007826  451943 retry.go:31] will retry after 1.084893616s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:25.097415  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:25.097446  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:25.097452  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:25.097461  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:25.097468  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:25.097488  451943 retry.go:31] will retry after 1.364718688s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.471347  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.968309  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.968540  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.981321  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:28.981763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.469277  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:26.469302  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:26.469308  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:26.469314  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:26.469319  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:26.469336  451943 retry.go:31] will retry after 1.608197445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.083522  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:28.083549  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:28.083554  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:28.083561  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:28.083566  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:28.083584  451943 retry.go:31] will retry after 1.803084046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:29.892783  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:29.892825  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:29.892834  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:29.892845  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:29.892852  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:29.892878  451943 retry.go:31] will retry after 2.500544298s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.970772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:30.972069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:31.478822  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:33.481537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:32.406761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:32.406791  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:32.406796  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:32.406803  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:32.406808  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:32.406826  451943 retry.go:31] will retry after 3.245901502s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:35.657591  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:35.657630  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:35.657636  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:35.657644  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:35.657650  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:35.657669  451943 retry.go:31] will retry after 2.987638992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:33.468927  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.968669  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.979914  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:37.982358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:38.652562  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:38.652589  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:38.652594  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:38.652600  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:38.652605  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:38.652621  451943 retry.go:31] will retry after 5.12035072s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:38.469167  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.469783  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.481402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:42.980559  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:43.778329  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:43.778358  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:43.778363  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:43.778370  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:43.778375  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:43.778392  451943 retry.go:31] will retry after 5.3812896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:42.972242  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.468157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.479217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:47.978368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.978994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.165092  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:49.165124  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:49.165129  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:49.165136  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:49.165142  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:49.165161  451943 retry.go:31] will retry after 8.788078847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:47.469586  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.968667  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.969102  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.979785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:53.984069  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:54.467285  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.469141  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.478629  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:58.479207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:57.958448  451943 system_pods.go:86] 5 kube-system pods found
	I0109 00:16:57.958475  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:57.958481  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Pending
	I0109 00:16:57.958485  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:57.958492  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:57.958497  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:57.958515  451943 retry.go:31] will retry after 8.563711001s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:58.470664  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.970608  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.481608  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:02.978829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:03.468919  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.469064  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.482545  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:07.979446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:06.528938  451943 system_pods.go:86] 6 kube-system pods found
	I0109 00:17:06.528963  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:06.528969  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:06.528973  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:06.528977  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:06.528987  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:06.528994  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:06.529016  451943 retry.go:31] will retry after 11.544909303s: missing components: etcd, kube-apiserver
	I0109 00:17:07.969131  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:09.969180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:10.479061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.480724  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.978853  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.468823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.469027  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:16.968659  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.081528  451943 system_pods.go:86] 8 kube-system pods found
	I0109 00:17:18.081568  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:18.081576  451943 system_pods.go:89] "etcd-old-k8s-version-003293" [f4516e0b-a960-4dc1-85c3-ae8197ded761] Running
	I0109 00:17:18.081583  451943 system_pods.go:89] "kube-apiserver-old-k8s-version-003293" [c5e83fe4-e95d-47ec-86a4-0615095ef746] Running
	I0109 00:17:18.081590  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:18.081596  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:18.081603  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:18.081613  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:18.081622  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:18.081636  451943 system_pods.go:126] duration metric: took 56.786133323s to wait for k8s-apps to be running ...
	I0109 00:17:18.081651  451943 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:17:18.081726  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:17:18.103798  451943 system_svc.go:56] duration metric: took 22.127635ms WaitForService to wait for kubelet.
	I0109 00:17:18.103844  451943 kubeadm.go:581] duration metric: took 1m7.273361806s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:17:18.103879  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:17:18.107740  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:17:18.107768  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:17:18.107803  451943 node_conditions.go:105] duration metric: took 3.918349ms to run NodePressure ...
	I0109 00:17:18.107814  451943 start.go:228] waiting for startup goroutines ...
	I0109 00:17:18.107826  451943 start.go:233] waiting for cluster config update ...
	I0109 00:17:18.107838  451943 start.go:242] writing updated cluster config ...
	I0109 00:17:18.108179  451943 ssh_runner.go:195] Run: rm -f paused
	I0109 00:17:18.161701  451943 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0109 00:17:18.163722  451943 out.go:177] 
	W0109 00:17:18.165269  451943 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0109 00:17:18.166781  451943 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0109 00:17:18.168422  451943 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-003293" cluster and "default" namespace by default
	I0109 00:17:16.980679  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:19.480507  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.969475  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.471739  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.978721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:24.478734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:23.968125  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:25.968375  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:26.483938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:28.979405  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:27.969238  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:29.969349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.973290  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.479085  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:33.978966  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:34.469294  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.967991  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.478328  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.481642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.970055  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:41.468509  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:40.978336  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:42.979499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:44.980394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:43.471069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:45.969083  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:47.479177  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:49.483109  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:48.469215  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:50.970448  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:51.979138  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:54.479275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:53.469152  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:55.470554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:56.480333  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:58.980818  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:57.968358  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:59.968498  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:01.485721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:03.980131  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:02.468272  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:04.469640  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:06.970010  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:05.981218  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:08.478827  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:09.469651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:11.970360  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:10.979972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:12.980174  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:14.470845  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:16.969297  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:15.479585  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:17.979035  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.979874  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.471447  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:21.473866  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:22.479239  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:24.979662  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:23.969077  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:26.469232  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:27.480054  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:29.978803  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:28.470397  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:30.968399  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:31.979175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:33.982180  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:32.467688  451984 pod_ready.go:81] duration metric: took 4m0.007315063s waiting for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	E0109 00:18:32.467715  451984 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:18:32.467724  451984 pod_ready.go:38] duration metric: took 4m2.010477321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:18:32.467740  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:18:32.467770  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:32.467841  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:32.540539  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.540568  451984 cri.go:89] found id: ""
	I0109 00:18:32.540578  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:32.540633  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.547617  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:32.547712  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:32.593446  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:32.593548  451984 cri.go:89] found id: ""
	I0109 00:18:32.593566  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:32.593622  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.598538  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:32.598630  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:32.641182  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.641217  451984 cri.go:89] found id: ""
	I0109 00:18:32.641227  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:32.641281  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.645529  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:32.645610  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:32.687187  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:32.687222  451984 cri.go:89] found id: ""
	I0109 00:18:32.687233  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:32.687299  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.691477  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:32.691551  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:32.730800  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:32.730834  451984 cri.go:89] found id: ""
	I0109 00:18:32.730853  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:32.730914  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.735372  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:32.735458  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:32.779326  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:32.779355  451984 cri.go:89] found id: ""
	I0109 00:18:32.779384  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:32.779528  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.784366  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:32.784444  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:32.825533  451984 cri.go:89] found id: ""
	I0109 00:18:32.825566  451984 logs.go:284] 0 containers: []
	W0109 00:18:32.825577  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:32.825586  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:32.825657  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:32.871429  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:32.871465  451984 cri.go:89] found id: ""
	I0109 00:18:32.871478  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:32.871546  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.876454  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:32.876483  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.931470  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:32.931518  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.976305  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:32.976344  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:33.421205  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:33.421256  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:33.436706  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:33.436752  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:33.605332  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:33.605369  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:33.653704  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:33.653746  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:33.697440  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:33.697489  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:33.753681  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:33.753728  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:33.798230  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:33.798271  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:33.862054  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:33.862089  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:33.942360  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:33.942549  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:33.965458  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:33.965503  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:34.012430  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012465  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:34.012554  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:34.012575  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:34.012583  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:34.012590  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012596  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:36.480501  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:38.979625  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:41.480903  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:43.978879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:44.014441  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:18:44.031831  451984 api_server.go:72] duration metric: took 4m15.676282348s to wait for apiserver process to appear ...
	I0109 00:18:44.031865  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:18:44.031906  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:44.031966  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:44.077138  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.077163  451984 cri.go:89] found id: ""
	I0109 00:18:44.077172  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:44.077232  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.081831  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:44.081906  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:44.121451  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.121474  451984 cri.go:89] found id: ""
	I0109 00:18:44.121482  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:44.121535  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.126070  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:44.126158  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:44.170657  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:44.170690  451984 cri.go:89] found id: ""
	I0109 00:18:44.170699  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:44.170753  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.175896  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:44.175977  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:44.220851  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.220877  451984 cri.go:89] found id: ""
	I0109 00:18:44.220886  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:44.220937  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.225006  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:44.225094  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:44.270073  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.270107  451984 cri.go:89] found id: ""
	I0109 00:18:44.270118  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:44.270188  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.275153  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:44.275245  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:44.318077  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:44.318111  451984 cri.go:89] found id: ""
	I0109 00:18:44.318122  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:44.318201  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.322475  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:44.322560  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:44.361736  451984 cri.go:89] found id: ""
	I0109 00:18:44.361773  451984 logs.go:284] 0 containers: []
	W0109 00:18:44.361784  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:44.361792  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:44.361864  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:44.404699  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:44.404726  451984 cri.go:89] found id: ""
	I0109 00:18:44.404737  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:44.404803  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.408753  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:44.408777  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.455119  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:44.455162  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.497680  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:44.497721  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:44.548809  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:44.548841  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:44.628959  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:44.629159  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:44.651315  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:44.651388  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:44.666013  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:44.666055  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.716269  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:44.716317  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.762681  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:44.762720  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:45.136682  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:45.136743  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:45.274971  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:45.275023  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:45.323164  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:45.323208  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:45.383823  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:45.383881  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:45.428483  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428516  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:45.428571  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:45.428579  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:45.428588  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:45.428601  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428608  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:45.980484  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:48.483446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:50.980210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:53.480495  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:55.429277  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:18:55.436812  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:18:55.438287  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:18:55.438316  451984 api_server.go:131] duration metric: took 11.40644287s to wait for apiserver health ...
	I0109 00:18:55.438327  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:18:55.438359  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:55.438433  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:55.485627  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:55.485654  451984 cri.go:89] found id: ""
	I0109 00:18:55.485664  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:55.485732  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.490219  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:55.490296  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:55.531890  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:55.531920  451984 cri.go:89] found id: ""
	I0109 00:18:55.531930  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:55.532002  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.536651  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:55.536724  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:55.579859  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:55.579909  451984 cri.go:89] found id: ""
	I0109 00:18:55.579921  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:55.579981  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.584894  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:55.584970  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:55.626833  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:55.626861  451984 cri.go:89] found id: ""
	I0109 00:18:55.626871  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:55.626940  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.631334  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:55.631449  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:55.675805  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:55.675831  451984 cri.go:89] found id: ""
	I0109 00:18:55.675843  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:55.675907  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.680727  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:55.680805  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:55.734757  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:55.734788  451984 cri.go:89] found id: ""
	I0109 00:18:55.734799  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:55.734867  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.739390  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:55.739464  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:55.785683  451984 cri.go:89] found id: ""
	I0109 00:18:55.785720  451984 logs.go:284] 0 containers: []
	W0109 00:18:55.785733  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:55.785741  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:55.785815  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:55.839983  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:55.840010  451984 cri.go:89] found id: ""
	I0109 00:18:55.840018  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:55.840066  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.844870  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:55.844897  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:55.979554  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:55.979600  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:56.023796  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:56.023840  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:56.070463  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:56.070512  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:56.116109  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:56.116142  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:56.505693  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:56.505742  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:56.566638  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:56.566683  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:56.649199  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.649372  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.670766  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:56.670809  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:56.719532  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:56.719574  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:56.763714  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:56.763758  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:56.825271  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:56.825324  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:56.869669  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:56.869717  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:56.890240  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890274  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:56.890355  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:56.890385  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.890395  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.890406  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890415  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:55.481178  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:57.979207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:59.980319  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:02.478816  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:04.478919  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:06.899277  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:19:06.899321  451984 system_pods.go:61] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.899329  451984 system_pods.go:61] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.899334  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.899338  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.899348  451984 system_pods.go:61] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.899352  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.899383  451984 system_pods.go:61] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.899395  451984 system_pods.go:61] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.899414  451984 system_pods.go:74] duration metric: took 11.461075857s to wait for pod list to return data ...
	I0109 00:19:06.899429  451984 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:19:06.903404  451984 default_sa.go:45] found service account: "default"
	I0109 00:19:06.903436  451984 default_sa.go:55] duration metric: took 3.995992ms for default service account to be created ...
	I0109 00:19:06.903448  451984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:19:06.910497  451984 system_pods.go:86] 8 kube-system pods found
	I0109 00:19:06.910523  451984 system_pods.go:89] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.910528  451984 system_pods.go:89] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.910533  451984 system_pods.go:89] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.910537  451984 system_pods.go:89] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.910541  451984 system_pods.go:89] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.910545  451984 system_pods.go:89] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.910553  451984 system_pods.go:89] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.910558  451984 system_pods.go:89] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.910564  451984 system_pods.go:126] duration metric: took 7.110675ms to wait for k8s-apps to be running ...
	I0109 00:19:06.910571  451984 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:19:06.910616  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:19:06.927621  451984 system_svc.go:56] duration metric: took 17.036468ms WaitForService to wait for kubelet.
	I0109 00:19:06.927654  451984 kubeadm.go:581] duration metric: took 4m38.572113328s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:19:06.927677  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:19:06.931040  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:19:06.931071  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:19:06.931083  451984 node_conditions.go:105] duration metric: took 3.401351ms to run NodePressure ...
	I0109 00:19:06.931095  451984 start.go:228] waiting for startup goroutines ...
	I0109 00:19:06.931101  451984 start.go:233] waiting for cluster config update ...
	I0109 00:19:06.931113  451984 start.go:242] writing updated cluster config ...
	I0109 00:19:06.931454  451984 ssh_runner.go:195] Run: rm -f paused
	I0109 00:19:06.989366  451984 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:19:06.991673  451984 out.go:177] * Done! kubectl is now configured to use "embed-certs-845373" cluster and "default" namespace by default
	I0109 00:19:06.479508  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:08.978313  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:11.482400  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:13.979056  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:16.480908  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:18.481024  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:20.482252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:22.978703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:24.979574  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:26.979620  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:29.478426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:31.478540  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:33.478901  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:35.978875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:36.471149  452237 pod_ready.go:81] duration metric: took 4m0.000060952s waiting for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	E0109 00:19:36.471203  452237 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:19:36.471221  452237 pod_ready.go:38] duration metric: took 4m3.426617855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:19:36.471243  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:19:36.471314  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:36.471400  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:36.539330  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:36.539370  452237 cri.go:89] found id: ""
	I0109 00:19:36.539383  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:36.539446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.544259  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:36.544339  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:36.591395  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:36.591437  452237 cri.go:89] found id: ""
	I0109 00:19:36.591448  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:36.591520  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.596454  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:36.596523  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:36.641041  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:36.641070  452237 cri.go:89] found id: ""
	I0109 00:19:36.641082  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:36.641145  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.645716  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:36.645798  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:36.686577  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:36.686607  452237 cri.go:89] found id: ""
	I0109 00:19:36.686618  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:36.686686  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.690744  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:36.690824  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:36.733504  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:36.733534  452237 cri.go:89] found id: ""
	I0109 00:19:36.733544  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:36.733613  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.738581  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:36.738663  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:36.783280  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:36.783314  452237 cri.go:89] found id: ""
	I0109 00:19:36.783326  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:36.783419  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.788101  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:36.788171  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:36.839094  452237 cri.go:89] found id: ""
	I0109 00:19:36.839124  452237 logs.go:284] 0 containers: []
	W0109 00:19:36.839133  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:36.839139  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:36.839201  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:36.880203  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:36.880236  452237 cri.go:89] found id: ""
	I0109 00:19:36.880247  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:36.880329  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.884703  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:36.884732  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:36.900132  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:36.900175  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:37.044558  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:37.044596  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:37.090555  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:37.090601  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:37.550107  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:37.550164  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:37.608267  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:37.608316  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:37.689186  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:37.689447  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:37.712896  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:37.712958  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:37.766035  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:37.766078  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:37.814072  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:37.814111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:37.858686  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:37.858725  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:37.912616  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:37.912661  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:37.973080  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:37.973129  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:38.016941  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.016989  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:38.017072  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:38.017088  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:38.017101  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:38.017118  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.017128  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:48.018753  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:19:48.040302  452237 api_server.go:72] duration metric: took 4m15.967717255s to wait for apiserver process to appear ...
	I0109 00:19:48.040335  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:19:48.040382  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:48.040539  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:48.105058  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.105084  452237 cri.go:89] found id: ""
	I0109 00:19:48.105095  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:48.105158  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.110067  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:48.110165  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:48.153350  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.153383  452237 cri.go:89] found id: ""
	I0109 00:19:48.153394  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:48.153464  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.158284  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:48.158355  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:48.205447  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.205480  452237 cri.go:89] found id: ""
	I0109 00:19:48.205492  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:48.205572  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.210254  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:48.210353  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:48.253594  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:48.253624  452237 cri.go:89] found id: ""
	I0109 00:19:48.253633  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:48.253700  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.259160  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:48.259229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:48.302358  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.302383  452237 cri.go:89] found id: ""
	I0109 00:19:48.302393  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:48.302446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.308134  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:48.308229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:48.349632  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:48.349656  452237 cri.go:89] found id: ""
	I0109 00:19:48.349664  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:48.349715  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.354626  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:48.354693  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:48.400501  452237 cri.go:89] found id: ""
	I0109 00:19:48.400535  452237 logs.go:284] 0 containers: []
	W0109 00:19:48.400547  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:48.400555  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:48.400626  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:48.444607  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:48.444631  452237 cri.go:89] found id: ""
	I0109 00:19:48.444641  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:48.444710  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.448965  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:48.449000  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:48.496050  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:48.496085  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:48.620778  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:48.620812  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.688155  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:48.688204  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.745755  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:48.745792  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.786141  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:48.786195  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.833422  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:48.833456  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:49.231467  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:49.231508  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:49.315139  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.315313  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.337901  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:49.337942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:49.353452  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:49.353494  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:49.409069  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:49.409111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:49.466267  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:49.466311  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:49.512720  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512762  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:49.512838  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:49.512858  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.512868  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.512882  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512891  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:59.513828  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:19:59.518896  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:19:59.520439  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:19:59.520463  452237 api_server.go:131] duration metric: took 11.480122148s to wait for apiserver health ...
	I0109 00:19:59.520479  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:19:59.520504  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:59.520549  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:59.566636  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:59.566669  452237 cri.go:89] found id: ""
	I0109 00:19:59.566680  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:59.566773  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.570754  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:59.570817  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:59.612286  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:59.612314  452237 cri.go:89] found id: ""
	I0109 00:19:59.612326  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:59.612399  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.618705  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:59.618778  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:59.666381  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:59.666408  452237 cri.go:89] found id: ""
	I0109 00:19:59.666417  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:59.666468  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.672155  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:59.672242  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:59.712973  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:59.712997  452237 cri.go:89] found id: ""
	I0109 00:19:59.713005  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:59.713068  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.717181  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:59.717261  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:59.762121  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:59.762153  452237 cri.go:89] found id: ""
	I0109 00:19:59.762163  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:59.762236  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.766573  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:59.766630  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:59.812202  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:59.812233  452237 cri.go:89] found id: ""
	I0109 00:19:59.812246  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:59.812309  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.817529  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:59.817615  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:59.865373  452237 cri.go:89] found id: ""
	I0109 00:19:59.865402  452237 logs.go:284] 0 containers: []
	W0109 00:19:59.865410  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:59.865417  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:59.865486  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:59.914250  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:59.914273  452237 cri.go:89] found id: ""
	I0109 00:19:59.914283  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:59.914369  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.918360  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:59.918391  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:59.999676  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:59.999875  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.022457  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:20:00.022496  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:20:00.082902  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:20:00.082942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:20:00.127886  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:20:00.127933  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:20:00.168705  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:20:00.168737  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:20:00.554704  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:20:00.554751  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:20:00.604427  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:20:00.604462  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:20:00.618923  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:20:00.618954  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:20:00.747443  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:20:00.747475  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:20:00.802652  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:20:00.802691  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:20:00.849279  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:20:00.849318  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:20:00.887879  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:20:00.887919  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:20:00.951894  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.951928  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:20:00.951999  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:20:00.952011  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:20:00.952019  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.952030  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.952035  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:20:10.962675  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:20:10.962706  452237 system_pods.go:61] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.962711  452237 system_pods.go:61] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.962716  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.962721  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.962725  452237 system_pods.go:61] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.962729  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.962735  452237 system_pods.go:61] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.962740  452237 system_pods.go:61] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.962747  452237 system_pods.go:74] duration metric: took 11.442261888s to wait for pod list to return data ...
	I0109 00:20:10.962755  452237 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:20:10.965782  452237 default_sa.go:45] found service account: "default"
	I0109 00:20:10.965808  452237 default_sa.go:55] duration metric: took 3.046646ms for default service account to be created ...
	I0109 00:20:10.965817  452237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:20:10.972286  452237 system_pods.go:86] 8 kube-system pods found
	I0109 00:20:10.972323  452237 system_pods.go:89] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.972331  452237 system_pods.go:89] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.972340  452237 system_pods.go:89] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.972349  452237 system_pods.go:89] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.972356  452237 system_pods.go:89] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.972366  452237 system_pods.go:89] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.972381  452237 system_pods.go:89] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.972392  452237 system_pods.go:89] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.972406  452237 system_pods.go:126] duration metric: took 6.583119ms to wait for k8s-apps to be running ...
	I0109 00:20:10.972427  452237 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:20:10.972490  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:20:10.992310  452237 system_svc.go:56] duration metric: took 19.873367ms WaitForService to wait for kubelet.
	I0109 00:20:10.992340  452237 kubeadm.go:581] duration metric: took 4m38.919766965s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:20:10.992363  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:20:10.996337  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:20:10.996373  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:20:10.996390  452237 node_conditions.go:105] duration metric: took 4.019869ms to run NodePressure ...
	I0109 00:20:10.996405  452237 start.go:228] waiting for startup goroutines ...
	I0109 00:20:10.996414  452237 start.go:233] waiting for cluster config update ...
	I0109 00:20:10.996429  452237 start.go:242] writing updated cluster config ...
	I0109 00:20:10.996742  452237 ssh_runner.go:195] Run: rm -f paused
	I0109 00:20:11.052916  452237 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0109 00:20:11.055339  452237 out.go:177] * Done! kubectl is now configured to use "no-preload-378213" cluster and "default" namespace by default
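	The readiness checks traced in the log above can be reproduced by hand on the minikube VM. A minimal sketch, assuming SSH access to the node and the same CRI-O/crictl setup the test uses; the commands mirror the ssh_runner lines above, and the container ID is a placeholder:
	
	# probe the apiserver health endpoint the log polls
	curl -k https://192.168.61.62:8443/healthz
	# list every CRI container, running or exited
	sudo crictl ps -a
	# tail the logs of one container by ID
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	# confirm the kubelet unit is active, as the WaitForService step does
	sudo systemctl is-active --quiet service kubelet && echo kubelet running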
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:10:06 UTC, ends at Tue 2024-01-09 00:26:20 UTC. --
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.025630083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759980025617474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a9edeef0-48de-4352-8f76-67dbb566648f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.026391042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e59a369-288f-423e-ad5e-6d0b87117139 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.026490325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e59a369-288f-423e-ad5e-6d0b87117139 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.026675587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e59a369-288f-423e-ad5e-6d0b87117139 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.068917750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ddab02bd-a776-481d-9b06-6f877a5adbf5 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.068973629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ddab02bd-a776-481d-9b06-6f877a5adbf5 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.070132729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8ede1520-44e5-49d6-bdff-7bc5eadd9aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.070557460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759980070543575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=8ede1520-44e5-49d6-bdff-7bc5eadd9aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.071039825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b4738018-a9b3-4e4f-a36b-9d50d42e24f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.071113128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b4738018-a9b3-4e4f-a36b-9d50d42e24f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.071301530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b4738018-a9b3-4e4f-a36b-9d50d42e24f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.112515335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e9c0c490-1380-44c5-be40-f845399f8c48 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.112597921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e9c0c490-1380-44c5-be40-f845399f8c48 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.114262721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7180400f-e5ac-4154-bc40-1ff6537c3992 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.114870875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759980114764913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7180400f-e5ac-4154-bc40-1ff6537c3992 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.115512041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55374dd8-c612-4a8e-868b-7a540556a8de name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.115589463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55374dd8-c612-4a8e-868b-7a540556a8de name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.115899384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55374dd8-c612-4a8e-868b-7a540556a8de name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.154446513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b2c4f6fa-7187-4386-861f-4fee8bfa5bd4 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.154528311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b2c4f6fa-7187-4386-861f-4fee8bfa5bd4 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.156343638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=944d7cce-67f7-46f2-91ed-402366b3aff4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.156855831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704759980156767349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=944d7cce-67f7-46f2-91ed-402366b3aff4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.157843397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cf1742d-bf26-4ded-9cf4-c505d5c3a649 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.157916469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cf1742d-bf26-4ded-9cf4-c505d5c3a649 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:26:20 old-k8s-version-003293 crio[729]: time="2024-01-09 00:26:20.158131093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cf1742d-bf26-4ded-9cf4-c505d5c3a649 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e25cd2c892d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   37b29a7d3bfe3       storage-provisioner
	17dc6ef75c618       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   54d7cb7dd30a2       coredns-5644d7b6d9-8pkqq
	901108dc95db4       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   fdfcaed558b5f       kube-proxy-h8br2
	9435012e8152c       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   0f4694eb54e11       etcd-old-k8s-version-003293
	5374a9cceed08       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   089b0c01eba48       kube-scheduler-old-k8s-version-003293
	ef679dd71c7bb       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   6b4c05a9ceacd       kube-controller-manager-old-k8s-version-003293
	bfc228c8d35e5       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   33a3e2bd44491       kube-apiserver-old-k8s-version-003293
	545c3df0e504b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   33a3e2bd44491       kube-apiserver-old-k8s-version-003293
	
	
	==> coredns [17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a] <==
	.:53
	2024-01-09T00:16:12.824Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-09T00:16:12.824Z [INFO] CoreDNS-1.6.2
	2024-01-09T00:16:12.824Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-09T00:16:13.858Z [INFO] 127.0.0.1:33319 - 41703 "HINFO IN 8110508765458628312.3799816984617018093. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034938749s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-003293
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-003293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=old-k8s-version-003293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:25:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:25:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:25:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:25:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.81
	  Hostname:    old-k8s-version-003293
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 49f6c1a6bb44454db83294b9ea7b39ff
	 System UUID:                49f6c1a6-bb44-454d-b832-94b9ea7b39ff
	 Boot ID:                    f192b0ec-7f75-483f-b3ee-d655d1b3cb77
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-8pkqq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-003293                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                kube-apiserver-old-k8s-version-003293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                kube-controller-manager-old-k8s-version-003293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                kube-proxy-h8br2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-003293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                metrics-server-74d5856cc6-xdjs4                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-003293  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077324] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan 9 00:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.539993] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145362] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.539596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000010] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.943318] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.130020] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.178463] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.123133] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.240760] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +20.081351] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.450304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 9 00:11] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 9 00:15] systemd-fstab-generator[3100]: Ignoring "noauto" for root device
	[  +1.777811] kauditd_printk_skb: 8 callbacks suppressed
	[Jan 9 00:16] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232] <==
	2024-01-09 00:15:44.761581 I | raft: c388cf4f1b00fa7 became follower at term 0
	2024-01-09 00:15:44.761602 I | raft: newRaft c388cf4f1b00fa7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-09 00:15:44.761617 I | raft: c388cf4f1b00fa7 became follower at term 1
	2024-01-09 00:15:44.772274 W | auth: simple token is not cryptographically signed
	2024-01-09 00:15:44.778613 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-09 00:15:44.781956 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-09 00:15:44.782186 I | embed: listening for metrics on http://192.168.72.81:2381
	2024-01-09 00:15:44.782498 I | etcdserver: c388cf4f1b00fa7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-09 00:15:44.783225 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-09 00:15:44.783611 I | etcdserver/membership: added member c388cf4f1b00fa7 [https://192.168.72.81:2380] to cluster eabbb2578081711e
	2024-01-09 00:15:45.562126 I | raft: c388cf4f1b00fa7 is starting a new election at term 1
	2024-01-09 00:15:45.562158 I | raft: c388cf4f1b00fa7 became candidate at term 2
	2024-01-09 00:15:45.562168 I | raft: c388cf4f1b00fa7 received MsgVoteResp from c388cf4f1b00fa7 at term 2
	2024-01-09 00:15:45.562177 I | raft: c388cf4f1b00fa7 became leader at term 2
	2024-01-09 00:15:45.562182 I | raft: raft.node: c388cf4f1b00fa7 elected leader c388cf4f1b00fa7 at term 2
	2024-01-09 00:15:45.563103 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-09 00:15:45.563168 I | embed: ready to serve client requests
	2024-01-09 00:15:45.563331 I | etcdserver: published {Name:old-k8s-version-003293 ClientURLs:[https://192.168.72.81:2379]} to cluster eabbb2578081711e
	2024-01-09 00:15:45.563975 I | embed: ready to serve client requests
	2024-01-09 00:15:45.565136 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-09 00:15:45.566994 I | embed: serving client requests on 192.168.72.81:2379
	2024-01-09 00:15:45.567975 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-09 00:15:45.568044 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-09 00:25:45.582106 I | mvcc: store.index: compact 668
	2024-01-09 00:25:45.586069 I | mvcc: finished scheduled compaction at 668 (took 3.306474ms)
	
	
	==> kernel <==
	 00:26:20 up 16 min,  0 users,  load average: 0.10, 0.19, 0.17
	Linux old-k8s-version-003293 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682] <==
	W0109 00:15:38.631617       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.632209       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.632484       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633343       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633398       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633598       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.634940       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635140       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635281       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635322       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635762       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635830       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635942       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636499       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636593       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636883       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637023       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637198       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637284       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638173       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638231       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638254       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638286       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:39.912156       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:39.920146       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63] <==
	I0109 00:19:13.275962       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:19:13.276062       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:19:13.276149       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:19:13.276160       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:20:49.944629       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:20:49.944830       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:20:49.944924       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:20:49.944936       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:21:49.945391       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:21:49.945534       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:21:49.945602       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:21:49.945620       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:23:49.946034       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:23:49.946433       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:23:49.946529       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:23:49.946564       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:25:49.948451       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:25:49.948575       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:25:49.948666       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:25:49.948674       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4] <==
	E0109 00:20:12.013076       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:20:26.061125       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:20:42.265183       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:20:58.063426       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:21:12.517418       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:21:30.065923       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:21:42.769462       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:22:02.068499       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:22:13.021634       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:22:34.070526       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:22:43.273996       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:23:06.072770       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:23:13.526717       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:23:38.075267       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:23:43.779405       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:24:10.077745       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:24:14.031355       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:24:42.080103       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:24:44.283281       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:25:14.082358       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:25:14.535234       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0109 00:25:44.786988       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:25:46.084446       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:26:15.039013       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:26:18.086913       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d] <==
	W0109 00:16:12.407459       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0109 00:16:12.418543       1 node.go:135] Successfully retrieved node IP: 192.168.72.81
	I0109 00:16:12.418619       1 server_others.go:149] Using iptables Proxier.
	I0109 00:16:12.419006       1 server.go:529] Version: v1.16.0
	I0109 00:16:12.425199       1 config.go:313] Starting service config controller
	I0109 00:16:12.425290       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0109 00:16:12.425338       1 config.go:131] Starting endpoints config controller
	I0109 00:16:12.425360       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0109 00:16:12.527119       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0109 00:16:12.527134       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47] <==
	I0109 00:15:48.945833       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0109 00:15:48.975469       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:48.985138       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:49.004582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:49.010167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:15:49.010339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:49.010395       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:15:49.010433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:49.010465       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:49.011181       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:49.011384       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:49.011690       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:15:49.977868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:49.988200       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:50.006132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:50.011967       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:15:50.016013       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:50.017933       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:15:50.020171       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:50.022377       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:50.024051       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:50.025343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:50.026221       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:16:09.509171       1 factory.go:585] pod is already present in the activeQ
	E0109 00:16:09.534150       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:10:06 UTC, ends at Tue 2024-01-09 00:26:20 UTC. --
	Jan 09 00:21:56 old-k8s-version-003293 kubelet[3118]: E0109 00:21:56.195395    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:22:11 old-k8s-version-003293 kubelet[3118]: E0109 00:22:11.207769    3118 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:22:11 old-k8s-version-003293 kubelet[3118]: E0109 00:22:11.207914    3118 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:22:11 old-k8s-version-003293 kubelet[3118]: E0109 00:22:11.207961    3118 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:22:11 old-k8s-version-003293 kubelet[3118]: E0109 00:22:11.207990    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 09 00:22:26 old-k8s-version-003293 kubelet[3118]: E0109 00:22:26.197034    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:22:37 old-k8s-version-003293 kubelet[3118]: E0109 00:22:37.196160    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:22:49 old-k8s-version-003293 kubelet[3118]: E0109 00:22:49.196569    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:23:04 old-k8s-version-003293 kubelet[3118]: E0109 00:23:04.195525    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:23:18 old-k8s-version-003293 kubelet[3118]: E0109 00:23:18.195840    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:23:33 old-k8s-version-003293 kubelet[3118]: E0109 00:23:33.195932    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:23:46 old-k8s-version-003293 kubelet[3118]: E0109 00:23:46.196224    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:23:59 old-k8s-version-003293 kubelet[3118]: E0109 00:23:59.195979    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:24:12 old-k8s-version-003293 kubelet[3118]: E0109 00:24:12.196552    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:24:27 old-k8s-version-003293 kubelet[3118]: E0109 00:24:27.195729    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:24:38 old-k8s-version-003293 kubelet[3118]: E0109 00:24:38.196046    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:24:51 old-k8s-version-003293 kubelet[3118]: E0109 00:24:51.196412    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:04 old-k8s-version-003293 kubelet[3118]: E0109 00:25:04.195920    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:16 old-k8s-version-003293 kubelet[3118]: E0109 00:25:16.195689    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:28 old-k8s-version-003293 kubelet[3118]: E0109 00:25:28.195423    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:39 old-k8s-version-003293 kubelet[3118]: E0109 00:25:39.195967    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:42 old-k8s-version-003293 kubelet[3118]: E0109 00:25:42.284179    3118 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 09 00:25:53 old-k8s-version-003293 kubelet[3118]: E0109 00:25:53.195628    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:05 old-k8s-version-003293 kubelet[3118]: E0109 00:26:05.195874    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:19 old-k8s-version-003293 kubelet[3118]: E0109 00:26:19.196756    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a] <==
	I0109 00:16:13.104836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:16:13.123900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:16:13.124176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:16:13.137182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:16:13.138271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4fa71a0-98bf-489e-a78f-c5ca48fc8f89", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098 became leader
	I0109 00:16:13.142528       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098!
	I0109 00:16:13.243130       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-003293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-xdjs4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4: exit status 1 (71.388682ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-xdjs4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:19:19.627717  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:19:22.327187  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:19:40.116427  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:19:59.059019  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845373 -n embed-certs-845373
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:28:07.622805969 +0000 UTC m=+5786.499756178
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-845373 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-845373 logs -n 25: (1.778944253s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo cat                              | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:05:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:05:27.711531  452488 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:05:27.711728  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711742  452488 out.go:309] Setting ErrFile to fd 2...
	I0109 00:05:27.711750  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711982  452488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:05:27.712562  452488 out.go:303] Setting JSON to false
	I0109 00:05:27.713635  452488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17254,"bootTime":1704741474,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:05:27.713709  452488 start.go:138] virtualization: kvm guest
	I0109 00:05:27.716110  452488 out.go:177] * [default-k8s-diff-port-834116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:05:27.718021  452488 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:05:27.719311  452488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:05:27.718049  452488 notify.go:220] Checking for updates...
	I0109 00:05:27.720754  452488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:05:27.722073  452488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:05:27.723496  452488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:05:27.724923  452488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:05:27.726663  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:05:27.727158  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.727261  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.741812  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0109 00:05:27.742300  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.742911  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.742943  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.743249  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.743438  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.743694  452488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:05:27.743987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.744027  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.758231  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0109 00:05:27.758620  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.759039  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.759069  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.759349  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.759570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.797687  452488 out.go:177] * Using the kvm2 driver based on existing profile
	I0109 00:05:27.799282  452488 start.go:298] selected driver: kvm2
	I0109 00:05:27.799301  452488 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.799485  452488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:05:27.800156  452488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.800240  452488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:05:27.815851  452488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:05:27.816303  452488 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:05:27.816371  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:05:27.816384  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:05:27.816406  452488 start_flags.go:323] config:
	{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.816592  452488 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.818643  452488 out.go:177] * Starting control plane node default-k8s-diff-port-834116 in cluster default-k8s-diff-port-834116
	I0109 00:05:30.179677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:27.820207  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:05:27.820246  452488 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0109 00:05:27.820258  452488 cache.go:56] Caching tarball of preloaded images
	I0109 00:05:27.820344  452488 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:05:27.820354  452488 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:05:27.820455  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:05:27.820632  452488 start.go:365] acquiring machines lock for default-k8s-diff-port-834116: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:05:33.251703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:39.331707  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:42.403645  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:48.483635  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:51.555692  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:57.635653  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:00.707722  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:06.787696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:09.859664  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:15.939733  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:19.011687  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:25.091759  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:28.163666  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:34.243673  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:37.315693  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:43.395652  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:46.467622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:52.547639  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:55.619655  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:01.699734  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:04.771686  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:10.851703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:13.923711  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:20.003883  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:23.075726  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:29.155735  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:32.227698  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:38.307696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:41.379724  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:47.459727  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:50.531708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:56.611621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:59.683677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:05.763622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:08.835708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:14.915674  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:17.987706  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:24.067730  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:27.139621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:33.219667  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:36.291651  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:42.371678  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:45.443660  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:48.448024  451984 start.go:369] acquired machines lock for "embed-certs-845373" in 4m36.156097213s
	I0109 00:08:48.448197  451984 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:08:48.448239  451984 fix.go:54] fixHost starting: 
	I0109 00:08:48.448769  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:08:48.448810  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:08:48.464359  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0109 00:08:48.465014  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:08:48.465634  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:08:48.465669  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:08:48.466022  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:08:48.466241  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:08:48.466431  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:08:48.468132  451984 fix.go:102] recreateIfNeeded on embed-certs-845373: state=Stopped err=<nil>
	I0109 00:08:48.468162  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	W0109 00:08:48.468339  451984 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:08:48.470346  451984 out.go:177] * Restarting existing kvm2 VM for "embed-certs-845373" ...
	I0109 00:08:48.445374  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:08:48.445415  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:08:48.447757  451943 machine.go:91] provisioned docker machine in 4m37.407825673s
	I0109 00:08:48.447823  451943 fix.go:56] fixHost completed within 4m37.428599196s
	I0109 00:08:48.447831  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 4m37.428619873s
	W0109 00:08:48.447876  451943 start.go:694] error starting host: provision: host is not running
	W0109 00:08:48.448289  451943 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0109 00:08:48.448305  451943 start.go:709] Will try again in 5 seconds ...
	I0109 00:08:48.471819  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Start
	I0109 00:08:48.471966  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring networks are active...
	I0109 00:08:48.472753  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network default is active
	I0109 00:08:48.473111  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network mk-embed-certs-845373 is active
	I0109 00:08:48.473441  451984 main.go:141] libmachine: (embed-certs-845373) Getting domain xml...
	I0109 00:08:48.474114  451984 main.go:141] libmachine: (embed-certs-845373) Creating domain...
	I0109 00:08:49.716628  451984 main.go:141] libmachine: (embed-certs-845373) Waiting to get IP...
	I0109 00:08:49.717606  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.718022  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.718080  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.717994  452995 retry.go:31] will retry after 247.787821ms: waiting for machine to come up
	I0109 00:08:49.967655  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.968169  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.968203  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.968101  452995 retry.go:31] will retry after 339.65094ms: waiting for machine to come up
	I0109 00:08:50.309542  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.310008  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.310041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.309944  452995 retry.go:31] will retry after 475.654088ms: waiting for machine to come up
	I0109 00:08:50.787560  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.787930  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.787973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.787876  452995 retry.go:31] will retry after 437.198744ms: waiting for machine to come up
	I0109 00:08:51.226414  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.226866  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.226901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.226817  452995 retry.go:31] will retry after 501.606265ms: waiting for machine to come up
	I0109 00:08:51.730571  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.731041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.731084  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.730949  452995 retry.go:31] will retry after 707.547375ms: waiting for machine to come up
	I0109 00:08:53.450389  451943 start.go:365] acquiring machines lock for old-k8s-version-003293: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:08:52.440038  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:52.440373  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:52.440434  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:52.440330  452995 retry.go:31] will retry after 1.02016439s: waiting for machine to come up
	I0109 00:08:53.462628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:53.463090  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:53.463120  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:53.463037  452995 retry.go:31] will retry after 1.322196175s: waiting for machine to come up
	I0109 00:08:54.786979  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:54.787514  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:54.787540  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:54.787465  452995 retry.go:31] will retry after 1.260135214s: waiting for machine to come up
	I0109 00:08:56.049973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:56.050450  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:56.050478  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:56.050415  452995 retry.go:31] will retry after 1.476819521s: waiting for machine to come up
	I0109 00:08:57.529060  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:57.529497  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:57.529527  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:57.529444  452995 retry.go:31] will retry after 2.830903204s: waiting for machine to come up
	I0109 00:09:00.362901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:00.363333  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:00.363372  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:00.363292  452995 retry.go:31] will retry after 3.093040214s: waiting for machine to come up
	I0109 00:09:03.460541  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:03.461066  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:03.461103  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:03.461032  452995 retry.go:31] will retry after 3.190401984s: waiting for machine to come up
	I0109 00:09:06.654729  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655295  451984 main.go:141] libmachine: (embed-certs-845373) Found IP for machine: 192.168.50.132
	I0109 00:09:06.655331  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has current primary IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655343  451984 main.go:141] libmachine: (embed-certs-845373) Reserving static IP address...
	I0109 00:09:06.655828  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.655851  451984 main.go:141] libmachine: (embed-certs-845373) DBG | skip adding static IP to network mk-embed-certs-845373 - found existing host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"}
	I0109 00:09:06.655865  451984 main.go:141] libmachine: (embed-certs-845373) Reserved static IP address: 192.168.50.132
	I0109 00:09:06.655880  451984 main.go:141] libmachine: (embed-certs-845373) Waiting for SSH to be available...
	I0109 00:09:06.655969  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Getting to WaitForSSH function...
	I0109 00:09:06.658083  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658468  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.658501  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658615  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH client type: external
	I0109 00:09:06.658650  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa (-rw-------)
	I0109 00:09:06.658704  451984 main.go:141] libmachine: (embed-certs-845373) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:06.658725  451984 main.go:141] libmachine: (embed-certs-845373) DBG | About to run SSH command:
	I0109 00:09:06.658741  451984 main.go:141] libmachine: (embed-certs-845373) DBG | exit 0
	I0109 00:09:06.751337  451984 main.go:141] libmachine: (embed-certs-845373) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:06.751683  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetConfigRaw
	I0109 00:09:06.752338  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:06.754749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755133  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.755161  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755475  451984 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/config.json ...
	I0109 00:09:06.755689  451984 machine.go:88] provisioning docker machine ...
	I0109 00:09:06.755710  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:06.755939  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756108  451984 buildroot.go:166] provisioning hostname "embed-certs-845373"
	I0109 00:09:06.756133  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756287  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.758391  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758651  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.758678  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758821  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.759026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759151  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759276  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.759419  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.759891  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.759906  451984 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-845373 && echo "embed-certs-845373" | sudo tee /etc/hostname
	I0109 00:09:06.897829  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845373
	
	I0109 00:09:06.897862  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.900776  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901151  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.901194  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901354  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.901601  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901767  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.902093  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.902429  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.902457  451984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-845373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845373/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-845373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:07.035051  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:07.035088  451984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:07.035106  451984 buildroot.go:174] setting up certificates
	I0109 00:09:07.035141  451984 provision.go:83] configureAuth start
	I0109 00:09:07.035150  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:07.035470  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.038830  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039185  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.039216  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039473  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.041628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.041978  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.042006  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.042138  451984 provision.go:138] copyHostCerts
	I0109 00:09:07.042215  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:07.042235  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:07.042301  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:07.042386  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:07.042394  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:07.042420  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:07.042547  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:07.042557  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:07.042582  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:07.042658  451984 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845373 san=[192.168.50.132 192.168.50.132 localhost 127.0.0.1 minikube embed-certs-845373]
	I0109 00:09:07.146928  451984 provision.go:172] copyRemoteCerts
	I0109 00:09:07.147000  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:07.147026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.149665  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.149999  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.150025  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.150190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.150402  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.150624  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.150778  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.912619  452237 start.go:369] acquired machines lock for "no-preload-378213" in 4m22.586847609s
	I0109 00:09:07.912695  452237 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:07.912705  452237 fix.go:54] fixHost starting: 
	I0109 00:09:07.913160  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:07.913205  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:07.929558  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0109 00:09:07.930071  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:07.930620  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:09:07.930646  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:07.931015  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:07.931232  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:07.931421  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:09:07.933075  452237 fix.go:102] recreateIfNeeded on no-preload-378213: state=Stopped err=<nil>
	I0109 00:09:07.933114  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	W0109 00:09:07.933281  452237 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:07.935418  452237 out.go:177] * Restarting existing kvm2 VM for "no-preload-378213" ...
	I0109 00:09:07.246432  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:07.270463  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0109 00:09:07.294094  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:09:07.317414  451984 provision.go:86] duration metric: configureAuth took 282.256583ms
	I0109 00:09:07.317462  451984 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:07.317651  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:07.317743  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.320246  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320529  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.320557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320724  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.320930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321068  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321199  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.321480  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.321807  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.321831  451984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:07.649960  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:07.649991  451984 machine.go:91] provisioned docker machine in 894.285072ms
	I0109 00:09:07.650005  451984 start.go:300] post-start starting for "embed-certs-845373" (driver="kvm2")
	I0109 00:09:07.650020  451984 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:07.650052  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.650505  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:07.650537  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.653343  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653671  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.653695  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653913  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.654147  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.654345  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.654548  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.745211  451984 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:07.749547  451984 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:07.749608  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:07.749694  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:07.749790  451984 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:07.749906  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:07.758232  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:07.781504  451984 start.go:303] post-start completed in 131.476813ms
	I0109 00:09:07.781532  451984 fix.go:56] fixHost completed within 19.333293059s
	I0109 00:09:07.781556  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.784365  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.784751  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.784774  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.785021  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.785267  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785430  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785570  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.785745  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.786073  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.786085  451984 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:07.912423  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758947.859859847
	
	I0109 00:09:07.912452  451984 fix.go:206] guest clock: 1704758947.859859847
	I0109 00:09:07.912462  451984 fix.go:219] Guest: 2024-01-09 00:09:07.859859847 +0000 UTC Remote: 2024-01-09 00:09:07.781536446 +0000 UTC m=+295.641408793 (delta=78.323401ms)
	I0109 00:09:07.912487  451984 fix.go:190] guest clock delta is within tolerance: 78.323401ms
	I0109 00:09:07.912494  451984 start.go:83] releasing machines lock for "embed-certs-845373", held for 19.464424699s
	I0109 00:09:07.912529  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.912827  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.915749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916146  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.916177  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916358  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.916865  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917042  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917155  451984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:07.917208  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.917263  451984 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:07.917288  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.920121  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920158  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920573  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920608  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920626  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920648  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920703  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920858  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920942  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921034  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921122  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921185  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921263  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.921282  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:08.040953  451984 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:08.046882  451984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:08.204801  451984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:08.214653  451984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:08.214741  451984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:08.232714  451984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:08.232750  451984 start.go:475] detecting cgroup driver to use...
	I0109 00:09:08.232881  451984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:08.254408  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:08.266926  451984 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:08.267015  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:08.278971  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:08.291982  451984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:08.395029  451984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:08.514444  451984 docker.go:219] disabling docker service ...
	I0109 00:09:08.514527  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:08.528548  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:08.540899  451984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:08.669118  451984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:08.776487  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:08.791617  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:08.809437  451984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:08.809509  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.818817  451984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:08.818891  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.828374  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.839820  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.849449  451984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:08.858899  451984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:08.869295  451984 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:08.869377  451984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:08.885387  451984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:08.895106  451984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:09.007897  451984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:09.197656  451984 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:09.197737  451984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:09.203174  451984 start.go:543] Will wait 60s for crictl version
	I0109 00:09:09.203264  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:09:09.207312  451984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:09.245917  451984 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:09.245996  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.296410  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.345334  451984 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
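Note: the lines above show minikube reconfiguring CRI-O over SSH (pause image and cgroup_manager via sed), restarting the service, and then waiting up to 60s for /var/run/crio/crio.sock and for crictl to respond. For reference, a minimal Go sketch of the "wait for the runtime socket" step, assuming a plain stat-based poll (waitForSocket is a hypothetical helper, not minikube's actual code, which runs stat on the guest over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// "Will wait 60s for socket path /var/run/crio/crio.sock" in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}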
	I0109 00:09:07.937023  452237 main.go:141] libmachine: (no-preload-378213) Calling .Start
	I0109 00:09:07.937229  452237 main.go:141] libmachine: (no-preload-378213) Ensuring networks are active...
	I0109 00:09:07.938093  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network default is active
	I0109 00:09:07.938504  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network mk-no-preload-378213 is active
	I0109 00:09:07.938868  452237 main.go:141] libmachine: (no-preload-378213) Getting domain xml...
	I0109 00:09:07.939609  452237 main.go:141] libmachine: (no-preload-378213) Creating domain...
	I0109 00:09:09.254019  452237 main.go:141] libmachine: (no-preload-378213) Waiting to get IP...
	I0109 00:09:09.254967  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.255375  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.255465  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.255333  453115 retry.go:31] will retry after 260.636384ms: waiting for machine to come up
	I0109 00:09:09.518054  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.518563  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.518590  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.518522  453115 retry.go:31] will retry after 320.770806ms: waiting for machine to come up
	I0109 00:09:09.841203  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.841675  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.841710  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.841604  453115 retry.go:31] will retry after 317.226014ms: waiting for machine to come up
	I0109 00:09:10.160137  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.160545  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.160576  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.160522  453115 retry.go:31] will retry after 452.723717ms: waiting for machine to come up
	I0109 00:09:09.346886  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:09.350050  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350407  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:09.350440  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350626  451984 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:09.354884  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:09.367669  451984 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:09.367765  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:09.407793  451984 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:09.407876  451984 ssh_runner.go:195] Run: which lz4
	I0109 00:09:09.412172  451984 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:09:09.416303  451984 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:09.416331  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:11.408967  451984 crio.go:444] Took 1.996823 seconds to copy over tarball
	I0109 00:09:11.409067  451984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:10.615452  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.615971  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.615999  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.615922  453115 retry.go:31] will retry after 555.714359ms: waiting for machine to come up
	I0109 00:09:11.173767  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:11.174269  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:11.174301  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:11.174220  453115 retry.go:31] will retry after 843.630815ms: waiting for machine to come up
	I0109 00:09:12.019354  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:12.019896  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:12.019962  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:12.019884  453115 retry.go:31] will retry after 1.083324701s: waiting for machine to come up
	I0109 00:09:13.104954  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:13.105499  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:13.105535  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:13.105442  453115 retry.go:31] will retry after 1.445208328s: waiting for machine to come up
	I0109 00:09:14.552723  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:14.553247  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:14.553278  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:14.553202  453115 retry.go:31] will retry after 1.207345182s: waiting for machine to come up
	I0109 00:09:14.301519  451984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892406004s)
	I0109 00:09:14.301567  451984 crio.go:451] Took 2.892564 seconds to extract the tarball
	I0109 00:09:14.301579  451984 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:09:14.344103  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:14.399048  451984 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:09:14.399072  451984 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:09:14.399160  451984 ssh_runner.go:195] Run: crio config
	I0109 00:09:14.459603  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:14.459643  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:14.459693  451984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:14.459752  451984 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845373 NodeName:embed-certs-845373 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:14.460006  451984 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-845373"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:14.460098  451984 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-845373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:14.460176  451984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:09:14.469269  451984 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:14.469363  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:14.479156  451984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0109 00:09:14.496058  451984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:09:14.513299  451984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0109 00:09:14.530721  451984 ssh_runner.go:195] Run: grep 192.168.50.132	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:14.534849  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
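Note: the two ssh_runner lines above make the host-name mappings (host.minikube.internal earlier, control-plane.minikube.internal here) idempotent: drop any existing line for the name, append the fresh IP mapping, and copy the result back over /etc/hosts. A minimal Go equivalent of that shell pipeline, assuming a scratch file under /tmp so the sketch is safe to run (ensureHostsEntry is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends the
// desired "ip\thost" mapping, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/tmp/hosts", "192.168.50.132", "control-plane.minikube.internal"))
}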
	I0109 00:09:14.546999  451984 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373 for IP: 192.168.50.132
	I0109 00:09:14.547045  451984 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:14.547259  451984 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:14.547310  451984 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:14.547456  451984 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/client.key
	I0109 00:09:14.547531  451984 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key.073edd3d
	I0109 00:09:14.547584  451984 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key
	I0109 00:09:14.547733  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:14.547770  451984 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:14.547778  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:14.547803  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:14.547822  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:14.547851  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:14.547891  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:14.548888  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:14.574032  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:14.599543  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:14.625213  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:09:14.650001  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:14.675008  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:14.699179  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:14.722451  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:14.746559  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:14.769631  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:14.792906  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:14.815748  451984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:14.832389  451984 ssh_runner.go:195] Run: openssl version
	I0109 00:09:14.840602  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:14.856001  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862098  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862187  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.868184  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:14.879131  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:14.890092  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894911  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894969  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.900490  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:14.912056  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:14.923126  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.927937  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.928024  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.933646  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:14.944658  451984 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:14.949507  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:14.956040  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:14.962180  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:14.968224  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:14.974087  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:14.980079  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
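Note: the openssl runs above use "-checkend 86400" to confirm each control-plane certificate remains valid for at least the next 24 hours before reusing it. A minimal Go sketch of the same check using crypto/x509 (the path is taken from the log; expiresWithin is a hypothetical helper, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` as run in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}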
	I0109 00:09:14.986029  451984 kubeadm.go:404] StartCluster: {Name:embed-certs-845373 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:14.986148  451984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:14.986202  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:15.027950  451984 cri.go:89] found id: ""
	I0109 00:09:15.028035  451984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:15.039282  451984 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:15.039314  451984 kubeadm.go:636] restartCluster start
	I0109 00:09:15.039430  451984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:15.049695  451984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.050930  451984 kubeconfig.go:92] found "embed-certs-845373" server: "https://192.168.50.132:8443"
	I0109 00:09:15.053805  451984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:15.064953  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.065018  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.078921  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.565496  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.565626  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.578601  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.065133  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.065227  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.077749  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.565317  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.565425  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.578351  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:17.065861  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.065998  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.078781  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.762565  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:15.762982  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:15.763010  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:15.762909  453115 retry.go:31] will retry after 2.319709932s: waiting for machine to come up
	I0109 00:09:18.083780  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:18.084295  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:18.084330  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:18.084224  453115 retry.go:31] will retry after 2.101421106s: waiting for machine to come up
	I0109 00:09:20.188389  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:20.188770  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:20.188804  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:20.188712  453115 retry.go:31] will retry after 2.578747646s: waiting for machine to come up
	I0109 00:09:17.565567  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.565690  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.578496  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.065006  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.065120  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.078249  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.565568  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.565732  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.582691  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.065249  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.065353  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.082433  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.564998  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.565129  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.582026  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.065462  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.065563  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.079586  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.565150  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.565253  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.581576  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.065135  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.065246  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.080231  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.565856  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.566034  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.582980  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.065130  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.065245  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.078868  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.769370  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:22.769835  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:22.769877  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:22.769775  453115 retry.go:31] will retry after 4.446013118s: waiting for machine to come up
	I0109 00:09:22.565774  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.565850  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.581869  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.065381  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.065511  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.078260  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.565069  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.565171  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.577588  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.065102  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.065184  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.077356  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.565990  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.566090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.578416  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.065960  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:25.066090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:25.078618  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.078652  451984 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:25.078665  451984 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:25.078689  451984 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:25.078759  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:25.117213  451984 cri.go:89] found id: ""
	I0109 00:09:25.117304  451984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:25.133313  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:25.142683  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:25.142755  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152228  451984 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152252  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:25.273216  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.323239  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.049977221s)
	I0109 00:09:26.323274  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.531333  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.605976  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.691914  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:26.692006  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
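Note: the repeated "Checking apiserver status ..." / pgrep failures above, ending in "needs reconfigure: apiserver error: context deadline exceeded", are the usual poll-until-deadline pattern: probe for the kube-apiserver process, back off, and give up when the surrounding context expires. A minimal Go sketch of that loop (waitForAPIServer and its probe are hypothetical, not minikube's api_server.go):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer polls probe until it succeeds or ctx is done, which is why
// the loop above eventually reports "context deadline exceeded".
func waitForAPIServer(ctx context.Context, probe func() error) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := probe(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := waitForAPIServer(ctx, func() error { return errors.New("apiserver pid not found") })
	fmt.Println(err)
}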
	I0109 00:09:28.408538  452488 start.go:369] acquired machines lock for "default-k8s-diff-port-834116" in 4m0.587839533s
	I0109 00:09:28.408614  452488 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:28.408627  452488 fix.go:54] fixHost starting: 
	I0109 00:09:28.409094  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:28.409147  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:28.426990  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0109 00:09:28.427467  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:28.428010  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:09:28.428043  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:28.428413  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:28.428726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:28.428887  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:09:28.430477  452488 fix.go:102] recreateIfNeeded on default-k8s-diff-port-834116: state=Stopped err=<nil>
	I0109 00:09:28.430508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	W0109 00:09:28.430658  452488 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:28.432612  452488 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-834116" ...
	I0109 00:09:27.220872  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221372  452237 main.go:141] libmachine: (no-preload-378213) Found IP for machine: 192.168.61.62
	I0109 00:09:27.221401  452237 main.go:141] libmachine: (no-preload-378213) Reserving static IP address...
	I0109 00:09:27.221416  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has current primary IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221769  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.221820  452237 main.go:141] libmachine: (no-preload-378213) DBG | skip adding static IP to network mk-no-preload-378213 - found existing host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"}
	I0109 00:09:27.221842  452237 main.go:141] libmachine: (no-preload-378213) Reserved static IP address: 192.168.61.62
	I0109 00:09:27.221859  452237 main.go:141] libmachine: (no-preload-378213) Waiting for SSH to be available...
	I0109 00:09:27.221877  452237 main.go:141] libmachine: (no-preload-378213) DBG | Getting to WaitForSSH function...
	I0109 00:09:27.224260  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224609  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.224643  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224762  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH client type: external
	I0109 00:09:27.224792  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa (-rw-------)
	I0109 00:09:27.224822  452237 main.go:141] libmachine: (no-preload-378213) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:27.224832  452237 main.go:141] libmachine: (no-preload-378213) DBG | About to run SSH command:
	I0109 00:09:27.224841  452237 main.go:141] libmachine: (no-preload-378213) DBG | exit 0
	I0109 00:09:27.315335  452237 main.go:141] libmachine: (no-preload-378213) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:27.315823  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetConfigRaw
	I0109 00:09:27.316473  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.319014  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319305  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.319339  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319673  452237 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/config.json ...
	I0109 00:09:27.319916  452237 machine.go:88] provisioning docker machine ...
	I0109 00:09:27.319939  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:27.320167  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320354  452237 buildroot.go:166] provisioning hostname "no-preload-378213"
	I0109 00:09:27.320378  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320575  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.322760  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323156  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.323187  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323317  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.323542  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323711  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323869  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.324061  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.324556  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.324577  452237 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-378213 && echo "no-preload-378213" | sudo tee /etc/hostname
	I0109 00:09:27.452901  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-378213
	
	I0109 00:09:27.452957  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.456295  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456636  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.456693  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456919  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.457140  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457343  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457491  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.457671  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.458159  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.458188  452237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-378213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-378213/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-378213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:27.579589  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:27.579626  452237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:27.579658  452237 buildroot.go:174] setting up certificates
	I0109 00:09:27.579674  452237 provision.go:83] configureAuth start
	I0109 00:09:27.579688  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.580039  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.583100  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583557  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.583592  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583759  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.586482  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.586816  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.586862  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.587019  452237 provision.go:138] copyHostCerts
	I0109 00:09:27.587091  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:27.587105  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:27.587162  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:27.587246  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:27.587256  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:27.587276  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:27.587326  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:27.587333  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:27.587350  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:27.587423  452237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.no-preload-378213 san=[192.168.61.62 192.168.61.62 localhost 127.0.0.1 minikube no-preload-378213]
	I0109 00:09:27.642093  452237 provision.go:172] copyRemoteCerts
	I0109 00:09:27.642159  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:27.642186  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.645245  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645702  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.645736  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645959  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.646180  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.646367  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.646552  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:27.740878  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:09:27.770934  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:27.794548  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:27.819155  452237 provision.go:86] duration metric: configureAuth took 239.463059ms
	I0109 00:09:27.819191  452237 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:27.819452  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:09:27.819556  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.822793  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823249  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.823282  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.823666  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823812  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823943  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.824098  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.824547  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.824575  452237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:28.155878  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:28.155939  452237 machine.go:91] provisioned docker machine in 835.996764ms
	I0109 00:09:28.155955  452237 start.go:300] post-start starting for "no-preload-378213" (driver="kvm2")
	I0109 00:09:28.155975  452237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:28.156002  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.156370  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:28.156408  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.159411  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.159831  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.159863  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.160134  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.160347  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.160553  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.160700  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.249092  452237 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:28.253686  452237 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:28.253721  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:28.253812  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:28.253914  452237 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:28.254042  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:28.262550  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:28.286467  452237 start.go:303] post-start completed in 130.492214ms
	I0109 00:09:28.286497  452237 fix.go:56] fixHost completed within 20.373793038s
	I0109 00:09:28.286527  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.289569  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290022  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.290056  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290374  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.290619  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.290815  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.291040  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.291256  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:28.291770  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:28.291788  452237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:28.408354  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758968.353872845
	
	I0109 00:09:28.408384  452237 fix.go:206] guest clock: 1704758968.353872845
	I0109 00:09:28.408392  452237 fix.go:219] Guest: 2024-01-09 00:09:28.353872845 +0000 UTC Remote: 2024-01-09 00:09:28.286503221 +0000 UTC m=+283.122022206 (delta=67.369624ms)
	I0109 00:09:28.408411  452237 fix.go:190] guest clock delta is within tolerance: 67.369624ms
	I0109 00:09:28.408416  452237 start.go:83] releasing machines lock for "no-preload-378213", held for 20.495748993s
	I0109 00:09:28.408448  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.408745  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:28.411951  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412357  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.412395  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412550  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413258  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413495  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413588  452237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:28.413639  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.414067  452237 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:28.414125  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.416878  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417049  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417271  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417292  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417550  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417710  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.417720  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417771  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417896  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.417935  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.418108  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.418105  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.418226  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.533738  452237 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:28.541801  452237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:28.692517  452237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:28.700384  452237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:28.700455  452237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:28.720264  452237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:28.720300  452237 start.go:475] detecting cgroup driver to use...
	I0109 00:09:28.720376  452237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:28.739758  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:28.755682  452237 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:28.755754  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:28.772178  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:28.792261  452237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:28.908562  452237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:29.042390  452237 docker.go:219] disabling docker service ...
	I0109 00:09:29.042528  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:29.055964  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:29.071788  452237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:29.191963  452237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:29.322608  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:29.336149  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:29.357616  452237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:29.357765  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.372357  452237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:29.372436  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.393266  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.405729  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.417114  452237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:29.428259  452237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:29.440397  452237 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:29.440499  452237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:29.454482  452237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:29.467600  452237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:29.590644  452237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:29.786115  452237 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:29.786205  452237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:29.793049  452237 start.go:543] Will wait 60s for crictl version
	I0109 00:09:29.793129  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:29.798630  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:29.847758  452237 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:29.847850  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.905071  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.963992  452237 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:09:29.965790  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:29.969222  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969638  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:29.969687  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969930  452237 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:29.974709  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:29.989617  452237 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:09:29.989667  452237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:30.034776  452237 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:09:30.034804  452237 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.034911  452237 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0109 00:09:30.034925  452237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.034948  452237 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.035060  452237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.034904  452237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.035172  452237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036679  452237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036727  452237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.036737  452237 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.036788  452237 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0109 00:09:30.036814  452237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.036730  452237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.036846  452237 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.036678  452237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.208127  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:27.192095  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:27.692608  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.192176  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.692194  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.192059  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.219995  451984 api_server.go:72] duration metric: took 2.528085009s to wait for apiserver process to appear ...
	I0109 00:09:29.220032  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:09:29.220058  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:28.434238  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Start
	I0109 00:09:28.434453  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring networks are active...
	I0109 00:09:28.435324  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network default is active
	I0109 00:09:28.435804  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network mk-default-k8s-diff-port-834116 is active
	I0109 00:09:28.436322  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Getting domain xml...
	I0109 00:09:28.437072  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Creating domain...
	I0109 00:09:29.958911  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting to get IP...
	I0109 00:09:29.959938  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960820  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960896  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:29.960822  453241 retry.go:31] will retry after 210.498897ms: waiting for machine to come up
	I0109 00:09:30.173307  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173717  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173752  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.173670  453241 retry.go:31] will retry after 342.664675ms: waiting for machine to come up
	I0109 00:09:30.518442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519113  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.519069  453241 retry.go:31] will retry after 411.240969ms: waiting for machine to come up
	I0109 00:09:30.931762  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932152  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.932104  453241 retry.go:31] will retry after 402.965268ms: waiting for machine to come up
	I0109 00:09:31.336957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337426  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337459  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.337393  453241 retry.go:31] will retry after 626.321347ms: waiting for machine to come up
	I0109 00:09:31.965071  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965632  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965665  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.965592  453241 retry.go:31] will retry after 787.166947ms: waiting for machine to come up
	I0109 00:09:30.217603  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0109 00:09:30.234877  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.243097  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.258262  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.273678  452237 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0109 00:09:30.273761  452237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.273826  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.278909  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.285277  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.289552  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.430758  452237 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0109 00:09:30.430813  452237 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.430866  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.430995  452237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0109 00:09:30.431023  452237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.431061  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456561  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.456591  452237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0109 00:09:30.456636  452237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.456690  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456722  452237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0109 00:09:30.456757  452237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.456791  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456911  452237 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0109 00:09:30.456945  452237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.456976  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.482028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.482298  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.482547  452237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0109 00:09:30.482694  452237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.482754  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.518754  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.518899  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.518966  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.519317  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0109 00:09:30.519422  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.629846  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630082  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630145  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630189  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630022  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0109 00:09:30.630280  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:30.630028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.657819  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657907  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0109 00:09:30.657966  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657824  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0109 00:09:30.658025  452237 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658053  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:30.658084  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0109 00:09:30.658091  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658142  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0109 00:09:30.658173  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0109 00:09:30.714523  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:30.714654  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:32.867027  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.208889866s)
	I0109 00:09:32.867091  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0109 00:09:32.867107  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.209103985s)
	I0109 00:09:32.867122  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:32.867141  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867187  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.209109716s)
	I0109 00:09:32.867221  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0109 00:09:32.867220  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.15254199s)
	I0109 00:09:32.867251  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867190  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:35.150432  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.283143174s)
	I0109 00:09:35.150478  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0109 00:09:35.150509  452237 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:35.150560  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:34.179483  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.179518  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.179533  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.210742  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.210780  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.220940  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.259813  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.259869  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:34.720337  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.733062  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.733105  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.220599  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.228775  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:35.228814  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.720241  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.725882  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:09:35.736706  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:09:35.736745  451984 api_server.go:131] duration metric: took 6.516702561s to wait for apiserver health ...
	I0109 00:09:35.736790  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:35.736811  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:35.739014  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:09:35.740624  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:09:35.776055  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:09:35.814280  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:09:35.832281  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:09:35.832330  451984 system_pods.go:61] "coredns-5dd5756b68-vkd62" [c676d069-cca7-428c-8eec-026ecea14be2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:09:35.832342  451984 system_pods.go:61] "etcd-embed-certs-845373" [92d4616d-126c-4ee9-9475-9d0c790090c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:09:35.832354  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [9663f585-eca1-4f8f-8a93-aea9b4e98c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:09:35.832368  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [41b4ce59-d838-4798-b593-93c7c8573733] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:09:35.832383  451984 system_pods.go:61] "kube-proxy-tbzpb" [132469d5-d267-4869-ad09-c9fba8d0f9d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:09:35.832398  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [336147ec-8318-496b-986d-55845e7dd9a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:09:35.832408  451984 system_pods.go:61] "metrics-server-57f55c9bc5-2p4js" [c37e24f3-c50b-4169-9d0b-48e21072a114] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:09:35.832421  451984 system_pods.go:61] "storage-provisioner" [e558d9f2-6d92-41d6-82bf-194f53ead52c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:09:35.832436  451984 system_pods.go:74] duration metric: took 18.123808ms to wait for pod list to return data ...
	I0109 00:09:35.832451  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:09:35.836031  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:35.836180  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:35.836225  451984 node_conditions.go:105] duration metric: took 3.766883ms to run NodePressure ...
	I0109 00:09:35.836250  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:36.192967  451984 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198294  451984 kubeadm.go:787] kubelet initialised
	I0109 00:09:36.198327  451984 kubeadm.go:788] duration metric: took 5.32566ms waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198373  451984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:36.205198  451984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:36.230481  451984 pod_ready.go:97] node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230560  451984 pod_ready.go:81] duration metric: took 25.328027ms waiting for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	E0109 00:09:36.230576  451984 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230600  451984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:32.754128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779281  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779328  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:32.754425  453241 retry.go:31] will retry after 781.872506ms: waiting for machine to come up
	I0109 00:09:33.538136  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538643  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:33.538562  453241 retry.go:31] will retry after 1.315575893s: waiting for machine to come up
	I0109 00:09:34.856083  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857209  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857287  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:34.857007  453241 retry.go:31] will retry after 1.252692701s: waiting for machine to come up
	I0109 00:09:36.111647  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112092  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112127  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:36.112042  453241 retry.go:31] will retry after 1.549931798s: waiting for machine to come up
	I0109 00:09:37.664325  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664771  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664841  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:37.664729  453241 retry.go:31] will retry after 2.220936863s: waiting for machine to come up
	I0109 00:09:39.585741  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.435146297s)
	I0109 00:09:39.585853  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0109 00:09:39.585890  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:39.585954  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:38.239319  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:40.240459  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:39.886897  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887409  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887446  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:39.887322  453241 retry.go:31] will retry after 3.125817684s: waiting for machine to come up
	I0109 00:09:42.688186  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.102196226s)
	I0109 00:09:42.688238  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0109 00:09:42.688270  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:42.688333  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:44.144243  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.455874893s)
	I0109 00:09:44.144277  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0109 00:09:44.144322  452237 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:44.144396  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:45.193429  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.048998334s)
	I0109 00:09:45.193464  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0109 00:09:45.193501  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:45.193553  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:42.241597  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:44.740359  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:46.239061  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.239098  451984 pod_ready.go:81] duration metric: took 10.008483597s waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.239112  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244571  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.244598  451984 pod_ready.go:81] duration metric: took 5.476365ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244610  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249839  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.249866  451984 pod_ready.go:81] duration metric: took 5.248385ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249891  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254718  451984 pod_ready.go:92] pod "kube-proxy-tbzpb" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.254742  451984 pod_ready.go:81] duration metric: took 4.843779ms waiting for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254752  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:43.016904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017479  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:43.017386  453241 retry.go:31] will retry after 3.976875386s: waiting for machine to come up
	I0109 00:09:46.996452  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996902  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996937  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:46.996855  453241 retry.go:31] will retry after 5.149738116s: waiting for machine to come up
	I0109 00:09:47.750708  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.557124662s)
	I0109 00:09:47.750737  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0109 00:09:47.750767  452237 cache_images.go:123] Successfully loaded all cached images
	I0109 00:09:47.750773  452237 cache_images.go:92] LoadImages completed in 17.715956149s
	I0109 00:09:47.750871  452237 ssh_runner.go:195] Run: crio config
	I0109 00:09:47.811486  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:09:47.811510  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:47.811535  452237 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:47.811560  452237 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.62 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-378213 NodeName:no-preload-378213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:47.811757  452237 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-378213"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:47.811881  452237 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-378213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:47.811954  452237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:09:47.821353  452237 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:47.821426  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:47.830117  452237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0109 00:09:47.847966  452237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:09:47.865130  452237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0109 00:09:47.881920  452237 ssh_runner.go:195] Run: grep 192.168.61.62	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:47.885907  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:47.899472  452237 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213 for IP: 192.168.61.62
	I0109 00:09:47.899519  452237 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:47.899687  452237 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:47.899729  452237 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:47.899792  452237 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/client.key
	I0109 00:09:47.899854  452237 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key.fe752756
	I0109 00:09:47.899891  452237 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key
	I0109 00:09:47.899991  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:47.900022  452237 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:47.900033  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:47.900056  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:47.900084  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:47.900111  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:47.900176  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:47.900831  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:47.926702  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:47.952472  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:47.977143  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:09:48.001909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:48.028506  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:48.054909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:48.079320  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:48.106719  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:48.133440  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:48.157353  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:48.180860  452237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:48.198490  452237 ssh_runner.go:195] Run: openssl version
	I0109 00:09:48.204240  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:48.214015  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218654  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218717  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.224372  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:48.233922  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:48.243425  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248305  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248381  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.254018  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:48.263791  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:48.273568  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278373  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278438  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.284003  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:48.296358  452237 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:48.301336  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:48.307645  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:48.313470  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:48.319349  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:48.325344  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:48.331352  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:09:48.337159  452237 kubeadm.go:404] StartCluster: {Name:no-preload-378213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:48.337255  452237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:48.337302  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:48.374150  452237 cri.go:89] found id: ""
	I0109 00:09:48.374229  452237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:48.383627  452237 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:48.383649  452237 kubeadm.go:636] restartCluster start
	I0109 00:09:48.383699  452237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:48.392428  452237 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.393515  452237 kubeconfig.go:92] found "no-preload-378213" server: "https://192.168.61.62:8443"
	I0109 00:09:48.395997  452237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:48.404639  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.404708  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.416205  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.904794  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.904896  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.916391  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.404903  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.405006  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.416469  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.905053  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.905224  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.916621  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.262991  451984 pod_ready.go:102] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:50.262235  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:50.262262  451984 pod_ready.go:81] duration metric: took 4.007503301s waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:50.262275  451984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:52.150891  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151383  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Found IP for machine: 192.168.39.73
	I0109 00:09:52.151416  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserving static IP address...
	I0109 00:09:52.151442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has current primary IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.151943  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | skip adding static IP to network mk-default-k8s-diff-port-834116 - found existing host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"}
	I0109 00:09:52.151966  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserved static IP address: 192.168.39.73
	I0109 00:09:52.152005  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for SSH to be available...
	I0109 00:09:52.152039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Getting to WaitForSSH function...
	I0109 00:09:52.154139  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154471  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.154514  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154642  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH client type: external
	I0109 00:09:52.154672  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa (-rw-------)
	I0109 00:09:52.154701  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:52.154719  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | About to run SSH command:
	I0109 00:09:52.154736  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | exit 0
	I0109 00:09:52.247320  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:52.247704  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetConfigRaw
	I0109 00:09:52.248366  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.251047  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251482  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.251511  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251734  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:09:52.251981  452488 machine.go:88] provisioning docker machine ...
	I0109 00:09:52.252003  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:52.252219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252396  452488 buildroot.go:166] provisioning hostname "default-k8s-diff-port-834116"
	I0109 00:09:52.252418  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252612  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.254861  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255244  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.255276  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255485  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.255657  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255956  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.256111  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.256468  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.256485  452488 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-834116 && echo "default-k8s-diff-port-834116" | sudo tee /etc/hostname
	I0109 00:09:52.392092  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-834116
	
	I0109 00:09:52.392128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.394807  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395260  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.395312  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395539  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.395797  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396091  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396289  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.396464  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.396839  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.396863  452488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-834116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-834116/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-834116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:52.527950  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:52.527981  452488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:52.528006  452488 buildroot.go:174] setting up certificates
	I0109 00:09:52.528021  452488 provision.go:83] configureAuth start
	I0109 00:09:52.528033  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.528365  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.531179  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531597  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.531624  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531763  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.534073  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534480  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.534521  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534650  452488 provision.go:138] copyHostCerts
	I0109 00:09:52.534726  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:52.534737  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:52.534796  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:52.534902  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:52.534912  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:52.534933  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:52.535020  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:52.535027  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:52.535042  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:52.535093  452488 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-834116 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube default-k8s-diff-port-834116]
	I0109 00:09:53.636158  451943 start.go:369] acquired machines lock for "old-k8s-version-003293" in 1m0.185697203s
	I0109 00:09:53.636214  451943 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:53.636222  451943 fix.go:54] fixHost starting: 
	I0109 00:09:53.636646  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:53.636682  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:53.654194  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0109 00:09:53.654606  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:53.655203  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:09:53.655227  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:53.655659  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:53.655927  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:09:53.656139  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:09:53.657909  451943 fix.go:102] recreateIfNeeded on old-k8s-version-003293: state=Stopped err=<nil>
	I0109 00:09:53.657934  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	W0109 00:09:53.658135  451943 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:53.660261  451943 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003293" ...
	I0109 00:09:52.872029  452488 provision.go:172] copyRemoteCerts
	I0109 00:09:52.872106  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:52.872134  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.874824  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875218  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.875256  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.875726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.875959  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.876122  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:52.970940  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:52.995353  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0109 00:09:53.019846  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:53.048132  452488 provision.go:86] duration metric: configureAuth took 520.096734ms
	I0109 00:09:53.048166  452488 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:53.048357  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:53.048458  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.051336  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051745  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.051781  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051963  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.052200  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052578  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.052753  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.053273  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.053296  452488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:53.371482  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:53.371519  452488 machine.go:91] provisioned docker machine in 1.119521349s
	I0109 00:09:53.371534  452488 start.go:300] post-start starting for "default-k8s-diff-port-834116" (driver="kvm2")
	I0109 00:09:53.371572  452488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:53.371601  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.371940  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:53.371968  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.374606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.374999  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.375039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.375242  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.375487  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.375668  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.375823  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.469684  452488 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:53.474184  452488 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:53.474226  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:53.474291  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:53.474375  452488 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:53.474510  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:53.484106  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:53.508477  452488 start.go:303] post-start completed in 136.921252ms
	I0109 00:09:53.508516  452488 fix.go:56] fixHost completed within 25.099889324s
	I0109 00:09:53.508540  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.511508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.511954  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.511993  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.512174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.512412  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512605  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512739  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.512966  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.513304  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.513319  452488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:09:53.635969  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758993.581588382
	
	I0109 00:09:53.635992  452488 fix.go:206] guest clock: 1704758993.581588382
	I0109 00:09:53.636001  452488 fix.go:219] Guest: 2024-01-09 00:09:53.581588382 +0000 UTC Remote: 2024-01-09 00:09:53.508520878 +0000 UTC m=+265.847432935 (delta=73.067504ms)
	I0109 00:09:53.636037  452488 fix.go:190] guest clock delta is within tolerance: 73.067504ms
	I0109 00:09:53.636042  452488 start.go:83] releasing machines lock for "default-k8s-diff-port-834116", held for 25.227459425s
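	Before releasing the machine lock, fix.go compares the guest VM's wall clock with the host's and only forces a resync when the drift exceeds a tolerance; here the ~73ms delta is accepted. Below is a minimal Go sketch of that decision, not minikube's actual code: the helper name clockDeltaWithinTolerance and the 2-second tolerance are assumptions for illustration, while the two timestamps are taken from the log above.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// clockDeltaWithinTolerance reports whether the guest clock is close enough to the
	// host clock that no resync is needed (hypothetical helper; fix.go logs the
	// equivalent decision as "guest clock delta is within tolerance").
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
	
	func main() {
		guest := time.Unix(1704758993, 581588382)                          // guest clock reported over SSH
		host := time.Date(2024, 1, 9, 0, 9, 53, 508520878, time.UTC)       // "Remote" time from the log
		fmt.Println(clockDeltaWithinTolerance(guest, host, 2*time.Second)) // true: drift is roughly 73ms
	}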
	I0109 00:09:53.636078  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.636408  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:53.639469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.639957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.639990  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.640149  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640967  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.641079  452488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:53.641126  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.641236  452488 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:53.641263  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.643872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644145  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644230  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644258  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644427  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644519  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644552  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644618  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644698  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644784  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.644850  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644945  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.645012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.645188  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.758973  452488 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:53.765494  452488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:53.913457  452488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:53.921317  452488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:53.921409  452488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:53.937393  452488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:53.937422  452488 start.go:475] detecting cgroup driver to use...
	I0109 00:09:53.937501  452488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:53.954986  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:53.967577  452488 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:53.967661  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:53.981370  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:53.994954  452488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:54.113662  452488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:54.257917  452488 docker.go:219] disabling docker service ...
	I0109 00:09:54.258009  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:54.275330  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:54.287545  452488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:54.413696  452488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:54.534759  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:54.548789  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:54.567131  452488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:54.567209  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.578605  452488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:54.578690  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.588764  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.598290  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.608187  452488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:54.619339  452488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:54.627744  452488 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:54.627810  452488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:54.640572  452488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:54.649169  452488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:54.774028  452488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:54.981035  452488 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:54.981123  452488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:54.986812  452488 start.go:543] Will wait 60s for crictl version
	I0109 00:09:54.986874  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:09:54.991067  452488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:55.026881  452488 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:55.026988  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.084315  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.135003  452488 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0109 00:09:50.405359  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.405454  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.417541  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:50.904703  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.904809  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.916106  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.404732  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.404823  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.418697  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.905352  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.905439  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.917655  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.404773  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.404858  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.417345  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.905434  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.905529  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.916604  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.404704  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.404820  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.416990  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.905624  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.905727  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.918455  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.404944  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.405034  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.419015  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.905601  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.905738  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.921252  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.661730  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Start
	I0109 00:09:53.661977  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring networks are active...
	I0109 00:09:53.662718  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network default is active
	I0109 00:09:53.663173  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network mk-old-k8s-version-003293 is active
	I0109 00:09:53.663701  451943 main.go:141] libmachine: (old-k8s-version-003293) Getting domain xml...
	I0109 00:09:53.664456  451943 main.go:141] libmachine: (old-k8s-version-003293) Creating domain...
	I0109 00:09:55.030325  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting to get IP...
	I0109 00:09:55.031241  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.031720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.031800  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.031693  453422 retry.go:31] will retry after 209.915867ms: waiting for machine to come up
	I0109 00:09:55.243218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.243740  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.243792  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.243678  453422 retry.go:31] will retry after 309.964884ms: waiting for machine to come up
	I0109 00:09:55.555468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.556044  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.556075  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.555982  453422 retry.go:31] will retry after 306.870224ms: waiting for machine to come up
	I0109 00:09:55.864558  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.865161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.865199  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.865113  453422 retry.go:31] will retry after 475.599739ms: waiting for machine to come up
	I0109 00:09:52.270751  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:54.271341  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:56.775574  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:55.136380  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:55.139749  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140142  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:55.140174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140387  452488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:55.145715  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:55.159881  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:55.159972  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:55.209715  452488 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:55.209814  452488 ssh_runner.go:195] Run: which lz4
	I0109 00:09:55.214766  452488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:09:55.219645  452488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:55.219683  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:57.101116  452488 crio.go:444] Took 1.886420 seconds to copy over tarball
	I0109 00:09:57.101207  452488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:55.405633  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.405734  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.420242  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:55.905578  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.905685  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.923018  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.405516  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.405602  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.420028  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.905320  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.905409  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.940464  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.404810  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.404925  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.420965  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.905566  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.905684  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.920601  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:58.404728  452237 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:58.404779  452237 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:58.404821  452237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:58.404906  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:58.450415  452237 cri.go:89] found id: ""
	I0109 00:09:58.450510  452237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:58.469938  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:58.481877  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:58.481963  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494336  452237 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494367  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:58.644325  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.472346  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.715956  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.857573  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.962996  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:59.963097  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:56.342815  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.343422  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.343456  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.343365  453422 retry.go:31] will retry after 512.8445ms: waiting for machine to come up
	I0109 00:09:56.858161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.858689  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.858720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.858631  453422 retry.go:31] will retry after 649.65221ms: waiting for machine to come up
	I0109 00:09:57.509509  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:57.510080  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:57.510121  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:57.510023  453422 retry.go:31] will retry after 1.153518379s: waiting for machine to come up
	I0109 00:09:58.665328  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:58.665946  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:58.665986  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:58.665886  453422 retry.go:31] will retry after 1.392576392s: waiting for machine to come up
	I0109 00:10:00.060701  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:00.061368  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:00.061416  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:00.061263  453422 retry.go:31] will retry after 1.185250663s: waiting for machine to come up
	I0109 00:09:59.270305  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:01.271958  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:00.887146  452488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.785897124s)
	I0109 00:10:00.887183  452488 crio.go:451] Took 3.786033 seconds to extract the tarball
	I0109 00:10:00.887196  452488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:00.940322  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:01.087742  452488 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:10:01.087778  452488 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:10:01.087861  452488 ssh_runner.go:195] Run: crio config
	I0109 00:10:01.154384  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:01.154411  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:01.154432  452488 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:01.154460  452488 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-834116 NodeName:default-k8s-diff-port-834116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:10:01.154664  452488 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-834116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:01.154768  452488 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-834116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0109 00:10:01.154837  452488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:10:01.165075  452488 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:01.165167  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:01.175380  452488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0109 00:10:01.198018  452488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:01.216515  452488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0109 00:10:01.238477  452488 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:01.242706  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:01.256799  452488 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116 for IP: 192.168.39.73
	I0109 00:10:01.256833  452488 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:01.257009  452488 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:01.257084  452488 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:01.257180  452488 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/client.key
	I0109 00:10:01.257272  452488 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key.8b49dc8b
	I0109 00:10:01.257330  452488 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key
	I0109 00:10:01.257473  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:01.257512  452488 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:01.257529  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:01.257582  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:01.257632  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:01.257674  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:01.257737  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:01.258699  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:01.288498  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:10:01.315010  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:01.342657  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:10:01.368423  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:01.394295  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:01.423461  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:01.452044  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:01.478834  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:01.505029  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:01.531765  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:01.557126  452488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:01.575037  452488 ssh_runner.go:195] Run: openssl version
	I0109 00:10:01.580971  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:01.592882  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598205  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598285  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.604293  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:01.615508  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:01.625979  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631195  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631268  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.637322  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:01.649611  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:01.661754  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667033  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667114  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.673312  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:01.687649  452488 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:01.694523  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:01.701260  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:01.709371  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:01.717249  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:01.724104  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:01.730706  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:10:01.738716  452488 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:01.738846  452488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:01.738935  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:01.789522  452488 cri.go:89] found id: ""
	I0109 00:10:01.789639  452488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:01.802440  452488 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:01.802470  452488 kubeadm.go:636] restartCluster start
	I0109 00:10:01.802530  452488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:01.814839  452488 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:01.816303  452488 kubeconfig.go:92] found "default-k8s-diff-port-834116" server: "https://192.168.39.73:8444"
	I0109 00:10:01.818978  452488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:01.829115  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:01.829200  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:01.841947  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:02.329489  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.329629  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.346716  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:00.463974  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:00.963295  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.463906  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.963508  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.463259  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.964275  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.464037  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.963542  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.998344  452237 api_server.go:72] duration metric: took 4.035357514s to wait for apiserver process to appear ...
	I0109 00:10:03.998383  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:03.998415  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:03.999025  452237 api_server.go:269] stopped: https://192.168.61.62:8443/healthz: Get "https://192.168.61.62:8443/healthz": dial tcp 192.168.61.62:8443: connect: connection refused
	I0109 00:10:04.498619  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
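	Once the apiserver process exists, api_server.go stops polling pgrep and instead probes the /healthz endpoint over HTTPS until it answers or a deadline passes. Below is a minimal sketch of that wait, assuming the endpoint from the log and skipping TLS verification because the bootstrap certificate is self-signed; waitForHealthz is a hypothetical helper name, not minikube's API, and the poll interval and timeout are illustrative.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Self-signed bootstrap cert, so skip verification for the probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.61.62:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}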
	I0109 00:10:01.248726  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:01.249297  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:01.249334  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:01.249190  453422 retry.go:31] will retry after 2.101995832s: waiting for machine to come up
	I0109 00:10:03.353250  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:03.353837  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:03.353870  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:03.353803  453422 retry.go:31] will retry after 2.338357499s: waiting for machine to come up
	I0109 00:10:05.694257  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:05.694773  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:05.694805  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:05.694753  453422 retry.go:31] will retry after 2.962877462s: waiting for machine to come up
	I0109 00:10:03.772407  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:05.776569  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:02.829349  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.829477  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.845294  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.329917  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.330034  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.345877  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.829787  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.829908  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.845499  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.329869  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.329968  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.345228  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.829841  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.829964  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.841831  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.329392  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.329534  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.344928  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.829388  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.829490  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.845517  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.329745  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.329846  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.344692  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.829201  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.829339  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.844107  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.329562  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.329679  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.341888  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.617974  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.618015  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.618037  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:07.676283  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.676318  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.999237  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.036271  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.036307  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.498881  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.504457  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.504490  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.998535  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:09.009194  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:10:09.017267  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:10:09.017300  452237 api_server.go:131] duration metric: took 5.018909056s to wait for apiserver health ...
	I0109 00:10:09.017311  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:10:09.017319  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:09.019322  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:09.020666  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:09.030282  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:09.049477  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:09.063218  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:09.063264  452237 system_pods.go:61] "coredns-76f75df574-kw4v7" [6a2a3896-7b4c-4912-9e6a-0033564d211b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:09.063277  452237 system_pods.go:61] "etcd-no-preload-378213" [b650412b-fa3a-4490-9b43-caf6ac1cb8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:09.063294  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [b372f056-7243-416e-905f-ba80a332005a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:09.063307  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8b32fab5-ef2b-4145-8cf8-8ec616a73798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:09.063317  452237 system_pods.go:61] "kube-proxy-kxjqj" [40d27586-c2e4-407e-ac43-c0dbd851427e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:09.063325  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [2a609b1f-ce89-4e95-b56c-c84702352967] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:09.063343  452237 system_pods.go:61] "metrics-server-57f55c9bc5-th24j" [9f47b0d1-1399-4349-8f99-d85598461c68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:09.063383  452237 system_pods.go:61] "storage-provisioner" [f12f48e3-4e11-47e4-b785-ca9b47cbc0a4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:09.063396  452237 system_pods.go:74] duration metric: took 13.893709ms to wait for pod list to return data ...
	I0109 00:10:09.063407  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:09.067414  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:09.067457  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:09.067474  452237 node_conditions.go:105] duration metric: took 4.056143ms to run NodePressure ...
	I0109 00:10:09.067507  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:09.383666  452237 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389727  452237 kubeadm.go:787] kubelet initialised
	I0109 00:10:09.389749  452237 kubeadm.go:788] duration metric: took 6.05357ms waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389758  452237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:09.397162  452237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:08.658880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:08.659431  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:08.659468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:08.659353  453422 retry.go:31] will retry after 4.088487909s: waiting for machine to come up
	I0109 00:10:08.271546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:10.273183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:07.830081  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.830237  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.846118  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.329537  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.329642  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.345267  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.829229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.829351  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.845147  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.329244  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.329371  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.343552  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.829910  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.829999  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.841589  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.330229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.330316  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.346027  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.830077  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.830193  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.842301  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.329908  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.330029  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.341398  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.829904  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.830007  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.841281  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.841317  452488 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:11.841340  452488 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:11.841350  452488 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:11.841406  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:11.880872  452488 cri.go:89] found id: ""
	I0109 00:10:11.880993  452488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:11.896522  452488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:11.905372  452488 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:11.905452  452488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915053  452488 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915083  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:12.053489  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:11.406042  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:13.406387  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.752603  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753243  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has current primary IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753276  451943 main.go:141] libmachine: (old-k8s-version-003293) Found IP for machine: 192.168.72.81
	I0109 00:10:12.753290  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserving static IP address...
	I0109 00:10:12.753738  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.753770  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | skip adding static IP to network mk-old-k8s-version-003293 - found existing host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"}
	I0109 00:10:12.753790  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserved static IP address: 192.168.72.81
	I0109 00:10:12.753812  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting for SSH to be available...
	I0109 00:10:12.753829  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Getting to WaitForSSH function...
	I0109 00:10:12.756348  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756765  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.756798  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756931  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH client type: external
	I0109 00:10:12.756959  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa (-rw-------)
	I0109 00:10:12.756995  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:10:12.757008  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | About to run SSH command:
	I0109 00:10:12.757025  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | exit 0
	I0109 00:10:12.908563  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | SSH cmd err, output: <nil>: 
	I0109 00:10:12.909330  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetConfigRaw
	I0109 00:10:12.910245  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:12.913338  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.913744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.913778  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.914153  451943 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/config.json ...
	I0109 00:10:12.914422  451943 machine.go:88] provisioning docker machine ...
	I0109 00:10:12.914451  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:12.914678  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.914869  451943 buildroot.go:166] provisioning hostname "old-k8s-version-003293"
	I0109 00:10:12.914895  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.915042  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:12.917551  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.917918  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.917949  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.918083  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:12.918284  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918477  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918637  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:12.918824  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:12.919390  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:12.919409  451943 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003293 && echo "old-k8s-version-003293" | sudo tee /etc/hostname
	I0109 00:10:13.077570  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003293
	
	I0109 00:10:13.077613  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.081190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081575  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.081599  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081874  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.082128  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082377  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082568  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.082783  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.083268  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.083293  451943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003293/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:10:13.235134  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:10:13.235167  451943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:10:13.235216  451943 buildroot.go:174] setting up certificates
	I0109 00:10:13.235236  451943 provision.go:83] configureAuth start
	I0109 00:10:13.235254  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:13.235632  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:13.239282  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.239867  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.239902  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.240253  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.243109  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243516  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.243546  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243730  451943 provision.go:138] copyHostCerts
	I0109 00:10:13.243811  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:10:13.243826  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:10:13.243917  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:10:13.244095  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:10:13.244109  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:10:13.244139  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:10:13.244233  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:10:13.244244  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:10:13.244271  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:10:13.244357  451943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003293 san=[192.168.72.81 192.168.72.81 localhost 127.0.0.1 minikube old-k8s-version-003293]
	I0109 00:10:13.358229  451943 provision.go:172] copyRemoteCerts
	I0109 00:10:13.358298  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:10:13.358329  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.361495  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.361925  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.361961  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.362229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.362512  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.362707  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.362901  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:13.464633  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:10:13.491908  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:10:13.520424  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:10:13.551287  451943 provision.go:86] duration metric: configureAuth took 316.030603ms
	I0109 00:10:13.551322  451943 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:10:13.551588  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:10:13.551689  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.554570  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.554888  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.554941  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.555088  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.555402  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555595  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555803  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.555991  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.556435  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.556461  451943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:10:13.929994  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:10:13.930040  451943 machine.go:91] provisioned docker machine in 1.015597473s
	I0109 00:10:13.930056  451943 start.go:300] post-start starting for "old-k8s-version-003293" (driver="kvm2")
	I0109 00:10:13.930076  451943 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:10:13.930107  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:13.930498  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:10:13.930537  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.933680  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934172  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.934218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934589  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.934794  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.935029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.935189  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.038045  451943 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:10:14.044182  451943 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:10:14.044220  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:10:14.044315  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:10:14.044455  451943 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:10:14.044602  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:10:14.056820  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:14.083704  451943 start.go:303] post-start completed in 153.628012ms
	I0109 00:10:14.083736  451943 fix.go:56] fixHost completed within 20.447514213s
	I0109 00:10:14.083765  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.087190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087732  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.087776  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087968  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.088229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088467  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088630  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.088863  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:14.089367  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:14.089389  451943 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:10:14.224545  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704759014.163550757
	
	I0109 00:10:14.224580  451943 fix.go:206] guest clock: 1704759014.163550757
	I0109 00:10:14.224591  451943 fix.go:219] Guest: 2024-01-09 00:10:14.163550757 +0000 UTC Remote: 2024-01-09 00:10:14.083740733 +0000 UTC m=+363.223126670 (delta=79.810024ms)
	I0109 00:10:14.224620  451943 fix.go:190] guest clock delta is within tolerance: 79.810024ms
	I0109 00:10:14.224627  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 20.588443227s
	I0109 00:10:14.224659  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.224961  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:14.228116  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228565  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.228645  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228870  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229553  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229781  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229882  451943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:10:14.229958  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.230034  451943 ssh_runner.go:195] Run: cat /version.json
	I0109 00:10:14.230062  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.233060  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233305  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233484  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233511  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233691  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.233903  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233926  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233959  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234064  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.234220  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234290  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234400  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.234418  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234557  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.328685  451943 ssh_runner.go:195] Run: systemctl --version
	I0109 00:10:14.359854  451943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:10:14.515121  451943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:10:14.525585  451943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:10:14.525668  451943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:10:14.549678  451943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:10:14.549719  451943 start.go:475] detecting cgroup driver to use...
	I0109 00:10:14.549804  451943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:10:14.569734  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:10:14.587820  451943 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:10:14.587921  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:10:14.601724  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:10:14.615402  451943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:10:14.732774  451943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:10:14.872480  451943 docker.go:219] disabling docker service ...
	I0109 00:10:14.872579  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:10:14.887044  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:10:14.904944  451943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:10:15.043833  451943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:10:15.162992  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:10:15.176677  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:10:15.197594  451943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0109 00:10:15.197674  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.207993  451943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:10:15.208071  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.218230  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.228291  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.238163  451943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:10:15.248394  451943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:10:15.257457  451943 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:10:15.257541  451943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:10:15.271604  451943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:10:15.282409  451943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:10:15.401506  451943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:10:15.586851  451943 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:10:15.586942  451943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:10:15.593734  451943 start.go:543] Will wait 60s for crictl version
	I0109 00:10:15.593798  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:15.598705  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:10:15.642640  451943 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:10:15.642751  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.714964  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.773793  451943 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0109 00:10:15.775287  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:15.778313  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.778769  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:15.778795  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.779046  451943 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:10:15.783496  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:15.795338  451943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:10:15.795427  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:15.844077  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:15.844162  451943 ssh_runner.go:195] Run: which lz4
	I0109 00:10:15.848502  451943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:10:15.852893  451943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:10:15.852949  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
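The lines above show the preload flow: the on-guest tarball is missing, so the ~441 MB cached archive is copied over and later extracted with lz4 into /var. A small Go sketch of that check-copy-extract pattern follows; the local `cp` stands in for minikube's scp step, and paths are taken from the log.

```go
// Hedged sketch of the preload flow recorded above: check for the
// tarball, copy it if absent, then extract it with lz4 into /var.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const preload = "/preloaded.tar.lz4"

func ensurePreloaded(localTarball string) error {
	// Equivalent of: stat -c "%s %y" /preloaded.tar.lz4
	if _, err := os.Stat(preload); os.IsNotExist(err) {
		// minikube scp's the cached tarball over SSH; a local copy stands in here.
		if out, err := exec.Command("sudo", "cp", localTarball, preload).CombinedOutput(); err != nil {
			return fmt.Errorf("copy tarball: %v: %s", err, out)
		}
	}
	// Equivalent of the later: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", preload).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract tarball: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensurePreloaded("preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```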
	I0109 00:10:12.274183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:14.770967  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:16.781482  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.786247  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.017442  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.128701  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.223775  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:13.223873  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:13.724895  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.224593  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.724375  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.224993  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.724059  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.747019  452488 api_server.go:72] duration metric: took 2.523230788s to wait for apiserver process to appear ...
	I0109 00:10:15.747056  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:15.747083  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.747711  452488 api_server.go:269] stopped: https://192.168.39.73:8444/healthz: Get "https://192.168.39.73:8444/healthz": dial tcp 192.168.39.73:8444: connect: connection refused
	I0109 00:10:16.247411  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.407079  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.407307  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:19.407533  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.632956  451943 crio.go:444] Took 1.784489 seconds to copy over tarball
	I0109 00:10:17.633087  451943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:10:19.999506  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:19.999551  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:19.999569  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.066949  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:20.066982  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:20.247460  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.256943  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.256985  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:20.747576  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.755833  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.755892  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:21.247473  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:21.255476  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:10:21.266074  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:10:21.266115  452488 api_server.go:131] duration metric: took 5.519049271s to wait for apiserver health ...
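The healthz wait above tolerates early 403 responses (RBAC not yet bootstrapped) and 500 responses (post-start hooks still failing) before the endpoint finally returns 200 "ok". A minimal Go polling loop in that style is sketched below; the endpoint and ~500ms cadence come from the log, but the loop itself is a simplification, not minikube's api_server.go.

```go
// Hedged sketch of an apiserver /healthz polling loop: retry until 200
// or a deadline, skipping certificate verification as an anonymous probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			// 403 and 500 are expected while the control plane finishes starting.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.73:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```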
	I0109 00:10:21.266127  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:21.266136  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:21.401812  452488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:19.272981  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.770765  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.903126  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:21.921050  452488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:21.946628  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:21.959029  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:21.959077  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:21.959089  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:21.959100  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:21.959110  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:21.959125  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:21.959141  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:21.959149  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:21.959165  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:21.959178  452488 system_pods.go:74] duration metric: took 12.524667ms to wait for pod list to return data ...
	I0109 00:10:21.959198  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:21.963572  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:21.963614  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:21.963629  452488 node_conditions.go:105] duration metric: took 4.420685ms to run NodePressure ...
	I0109 00:10:21.963653  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:23.566660  452488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.602978271s)
	I0109 00:10:23.566704  452488 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573882  452488 kubeadm.go:787] kubelet initialised
	I0109 00:10:23.573911  452488 kubeadm.go:788] duration metric: took 7.19484ms waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573923  452488 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:23.590206  452488 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.603347  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603402  452488 pod_ready.go:81] duration metric: took 13.169776ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.603416  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603426  452488 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.614946  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.614986  452488 pod_ready.go:81] duration metric: took 11.548332ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.615003  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.615012  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.628345  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628378  452488 pod_ready.go:81] duration metric: took 13.353873ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.628389  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628396  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.635987  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636023  452488 pod_ready.go:81] duration metric: took 7.619372ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.636043  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636072  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.972993  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973028  452488 pod_ready.go:81] duration metric: took 336.946722ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.973040  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973046  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.371951  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.371991  452488 pod_ready.go:81] duration metric: took 398.932785ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.372016  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.372026  452488 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.775778  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775825  452488 pod_ready.go:81] duration metric: took 403.787436ms waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.775842  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775867  452488 pod_ready.go:38] duration metric: took 1.201917208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
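The pod_ready.go lines above skip each system pod because its node still reports Ready=False; a pod only counts as "Ready" when its PodReady condition is True. A short client-go sketch of that per-pod check follows, assuming a recent client-go and a kubeconfig in the default location; it is an illustration, not minikube's own wait loop.

```go
// Hedged sketch of a pod readiness check using client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition PodReady=True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-csrwr")
	fmt.Println(ok, err)
}
```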
	I0109 00:10:24.775895  452488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:10:24.793136  452488 ops.go:34] apiserver oom_adj: -16
	I0109 00:10:24.793169  452488 kubeadm.go:640] restartCluster took 22.990690796s
	I0109 00:10:24.793182  452488 kubeadm.go:406] StartCluster complete in 23.05448254s
	I0109 00:10:24.793207  452488 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.793302  452488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:10:24.795707  452488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.796107  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:10:24.796368  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:10:24.796346  452488 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:10:24.796413  452488 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796432  452488 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796457  452488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-834116"
	I0109 00:10:24.796466  452488 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.796477  452488 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:10:24.796560  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796982  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.796998  452488 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.797017  452488 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:24.797020  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0109 00:10:24.797025  452488 addons.go:246] addon metrics-server should already be in state true
	I0109 00:10:24.797083  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797296  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.797477  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797513  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.803857  452488 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-834116" context rescaled to 1 replicas
	I0109 00:10:24.803958  452488 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:10:24.806278  452488 out.go:177] * Verifying Kubernetes components...
	I0109 00:10:24.807850  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:10:24.817319  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0109 00:10:24.817600  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0109 00:10:24.817766  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818023  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818247  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818270  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.818697  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.818899  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818913  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0109 00:10:24.818937  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.819412  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.819459  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.823502  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.823611  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.824834  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.824859  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.824880  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.825291  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.826131  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.826160  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.829056  452488 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.829115  452488 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:10:24.829158  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.829610  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.829968  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.839969  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0109 00:10:24.840508  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.841140  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.841167  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.841542  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.841864  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.843844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.846088  452488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:24.844882  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0109 00:10:24.848051  452488 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:24.848069  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:10:24.848093  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.848445  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.849053  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.849074  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.849484  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.849550  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0109 00:10:24.849671  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.851401  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.851914  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.851961  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.853938  452488 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:10:22.516402  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:24.907337  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.059397  451943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.42624365s)
	I0109 00:10:21.059430  451943 crio.go:451] Took 3.426440 seconds to extract the tarball
	I0109 00:10:21.059441  451943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:21.109544  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:21.177321  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:21.177353  451943 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:10:21.177408  451943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.177455  451943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.177499  451943 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.177679  451943 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.177728  451943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.177688  451943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179256  451943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179325  451943 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0109 00:10:21.179257  451943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.179429  451943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.179551  451943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.179599  451943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.179888  451943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.180077  451943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.354975  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0109 00:10:21.363097  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.390461  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.393703  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.423416  451943 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0109 00:10:21.423475  451943 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0109 00:10:21.423523  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.433698  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.446038  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.466118  451943 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0109 00:10:21.466213  451943 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.466351  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.499618  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.516687  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.517553  451943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0109 00:10:21.517576  451943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0109 00:10:21.517608  451943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.517642  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0109 00:10:21.517653  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.517609  451943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.517735  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.543109  451943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0109 00:10:21.543170  451943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.543228  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571015  451943 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0109 00:10:21.571069  451943 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.571122  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571130  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.627517  451943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0109 00:10:21.627573  451943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.627623  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.730620  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0109 00:10:21.730693  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.730751  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.730772  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.730775  451943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.730876  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.730899  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0109 00:10:21.730965  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.861219  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0109 00:10:21.861308  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0109 00:10:21.870996  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0109 00:10:21.871033  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0109 00:10:21.871087  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0109 00:10:21.871117  451943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0109 00:10:21.871136  451943 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.871176  451943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0109 00:10:23.431278  451943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.560066098s)
	I0109 00:10:23.431320  451943 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0109 00:10:23.431403  451943 cache_images.go:92] LoadImages completed in 2.25403413s
	W0109 00:10:23.431502  451943 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
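The cache_images sequence above inspects each required image for its pinned ID, removes stale tags, and loads missing images from the host cache with `podman load`; here only pause:3.1 could be loaded because the other cached archives were absent. A compact Go sketch of the inspect-then-load decision follows; the helper names are illustrative and the image ID is the pause:3.1 hash quoted in the log.

```go
// Hedged sketch of the image cache check: verify the runtime has the
// image at the expected ID, otherwise load the cached archive.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the runtime already has the image at the expected ID.
func imagePresent(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) == expectedID
}

// loadFromCache streams a cached image archive into the runtime.
func loadFromCache(archive string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
	}
	return nil
}

func main() {
	img := "registry.k8s.io/pause:3.1"
	id := "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
	if !imagePresent(img, id) {
		if err := loadFromCache("/var/lib/minikube/images/pause_3.1"); err != nil {
			fmt.Println(err)
		}
	}
}
```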
	I0109 00:10:23.431630  451943 ssh_runner.go:195] Run: crio config
	I0109 00:10:23.501412  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:23.501437  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:23.501460  451943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:23.501478  451943 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003293 NodeName:old-k8s-version-003293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0109 00:10:23.501642  451943 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003293"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-003293
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.81:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:23.501740  451943 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003293 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:10:23.501815  451943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0109 00:10:23.515496  451943 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:23.515613  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:23.528701  451943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0109 00:10:23.549023  451943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:23.568686  451943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0109 00:10:23.588702  451943 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:23.593056  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:23.609254  451943 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293 for IP: 192.168.72.81
	I0109 00:10:23.609338  451943 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:23.609556  451943 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:23.609643  451943 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:23.609767  451943 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/client.key
	I0109 00:10:23.609842  451943 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key.289ddd16
	I0109 00:10:23.609908  451943 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key
	I0109 00:10:23.610069  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:23.610137  451943 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:23.610158  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:23.610197  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:23.610232  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:23.610265  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:23.610323  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:23.611274  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:23.637653  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0109 00:10:23.664578  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:23.694133  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:10:23.722658  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:23.750223  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:23.778539  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:23.802865  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:23.829553  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:23.857468  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:23.886744  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:23.913384  451943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:23.931928  451943 ssh_runner.go:195] Run: openssl version
	I0109 00:10:23.938105  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:23.949750  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955870  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955954  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.962486  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:23.975292  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:23.988504  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.993956  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.994025  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:24.000015  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:24.010775  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:24.021665  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026909  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026972  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.032957  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:24.043813  451943 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:24.048745  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:24.055015  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:24.061551  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:24.068075  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:24.075942  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:24.081898  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
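
The openssl invocations above do two things: derive the subject-hash symlink name for each CA certificate (the `openssl x509 -hash -noout -in ...` calls that produce names like 51391683.0 and b5213941.0 under /etc/ssl/certs) and confirm that each control-plane certificate stays valid for the next 86400 seconds (`-checkend 86400`). A minimal Go sketch of those two operations follows; it is illustrative only, not minikube's certs.go, and the paths are copied from the log.

// certlink.go - minimal sketch (assumed, not minikube's implementation) of the
// openssl hash-symlink and 24h-validity checks shown in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the short
// subject hash (e.g. "b5213941") used for the /etc/ssl/certs/<hash>.0 link name.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// validFor24h mirrors `openssl x509 -noout -checkend 86400`: it returns true
// when the certificate does not expire within the next 86400 seconds.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	hash, err := subjectHash(cert)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("%s -> %s (valid for 24h: %v)\n", link, cert, validFor24h(cert))
}
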
	I0109 00:10:24.088900  451943 kubeadm.go:404] StartCluster: {Name:old-k8s-version-003293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:24.089008  451943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:24.089075  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:24.138907  451943 cri.go:89] found id: ""
	I0109 00:10:24.139089  451943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:24.152607  451943 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:24.152636  451943 kubeadm.go:636] restartCluster start
	I0109 00:10:24.152696  451943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:24.166246  451943 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.167660  451943 kubeconfig.go:92] found "old-k8s-version-003293" server: "https://192.168.72.81:8443"
	I0109 00:10:24.171161  451943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:24.183456  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.183533  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.197246  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.684537  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.684670  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.698158  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.184562  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.184662  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.196624  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.684258  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.684379  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.699808  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
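
The repeating "Checking apiserver status ..." blocks above poll roughly every 500ms for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits with status 1 while no process matches, which is what the "stopped: unable to get apiserver pid" warnings record. A minimal Go sketch of that poll loop (assumed, not minikube's api_server.go):

// apiserverpid.go - poll for the kube-apiserver process until it appears or a
// deadline passes, mirroring the pgrep command and ~500ms cadence in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false // exit status 1: no matching process yet
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an illustrative choice
	for time.Now().Before(deadline) {
		if pid, ok := apiserverPID(); ok {
			fmt.Println("kube-apiserver pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver process never appeared")
}
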
	I0109 00:10:24.852491  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.852608  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.852621  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.855293  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.855444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.855453  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:10:24.855467  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:10:24.855484  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.855664  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.855746  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.855858  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.856036  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.857435  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.857481  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.858678  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859181  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.859219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859402  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.859570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.859724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.859856  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.875791  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0109 00:10:24.876275  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.876817  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.876856  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.877200  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.877454  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.879333  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.879644  452488 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:24.879661  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:10:24.879677  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.882683  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.883208  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883504  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.883694  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.883877  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.884070  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:25.036727  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:25.071034  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:10:25.071059  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:10:25.079722  452488 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:25.079745  452488 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0109 00:10:25.096822  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:25.107155  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:10:25.107187  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:10:25.149550  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:25.149576  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:10:25.202736  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:26.696247  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.659482228s)
	I0109 00:10:26.696317  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696334  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696330  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.599464128s)
	I0109 00:10:26.696379  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696398  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696816  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696856  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696855  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696865  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696874  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696883  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696899  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696908  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696935  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696945  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.697254  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697306  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697406  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697461  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697410  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.712803  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.712835  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.713140  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.713162  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736360  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.533581555s)
	I0109 00:10:26.736408  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.736780  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.736826  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.736841  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736852  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.737154  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.737190  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.737205  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.737215  452488 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:26.739310  452488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:10:23.774928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.270567  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.740691  452488 addons.go:508] enable addons completed in 1.94435105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:10:27.084669  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:27.404032  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.407712  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.184150  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.184272  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.196020  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:26.684603  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.684710  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.699571  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.184212  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.184309  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.196193  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.684572  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.684658  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.697405  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.183918  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.184043  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.197428  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.684565  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.684683  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.698124  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.183601  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.183725  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.195941  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.683554  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.683647  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.695548  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.184015  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.184116  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.196332  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.684533  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.684661  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.697315  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.771203  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.269907  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.584966  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:30.585616  452488 node_ready.go:49] node "default-k8s-diff-port-834116" has status "Ready":"True"
	I0109 00:10:30.585646  452488 node_ready.go:38] duration metric: took 5.505876157s waiting for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:30.585661  452488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:30.593510  452488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602388  452488 pod_ready.go:92] pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.602420  452488 pod_ready.go:81] duration metric: took 8.875538ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602438  452488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608316  452488 pod_ready.go:92] pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.608343  452488 pod_ready.go:81] duration metric: took 5.896652ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608355  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614031  452488 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.614056  452488 pod_ready.go:81] duration metric: took 5.692676ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614068  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619101  452488 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.619120  452488 pod_ready.go:81] duration metric: took 5.045637ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619129  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986089  452488 pod_ready.go:92] pod "kube-proxy-p9dmf" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.986121  452488 pod_ready.go:81] duration metric: took 366.984678ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986135  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385215  452488 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:31.385244  452488 pod_ready.go:81] duration metric: took 399.100168ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385254  452488 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
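
The pod_ready.go lines above wait, pod by pod, for each system-critical pod's Ready condition to become True and then record the duration. A minimal client-go sketch of that wait (assumed, not minikube's pod_ready.go; the pod name and 6m timeout are taken from the log as examples):

// podready.go - wait for a pod's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod every 2s until it is Ready or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		return isReady(p), nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-834116", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
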
	I0109 00:10:31.904561  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.905393  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.183976  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.184088  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:31.683769  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.683876  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.695944  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.184543  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.184631  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.197273  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.683504  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.683613  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.696431  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.183904  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.183981  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.195623  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.684295  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.684408  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.697442  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.184151  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:34.184264  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:34.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.196409  451943 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:34.196451  451943 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:34.196467  451943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:34.196558  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:34.243566  451943 cri.go:89] found id: ""
	I0109 00:10:34.243656  451943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:34.260912  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:34.270763  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:34.270859  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280082  451943 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280114  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:34.411011  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.279804  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.503377  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.616758  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
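
With no usable kubeconfig files on the node, the cluster is rebuilt by running the individual `kubeadm init phase` subcommands in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the regenerated /var/tmp/minikube/kubeadm.yaml and with PATH pointing at the pinned v1.16.0 binaries. A minimal Go sketch of that phased sequence, built from the exact commands logged above (assumed, not minikube's kubeadm.go):

// kubeadmphases.go - run the phased kubeadm re-initialisation seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Same shape as the logged commands: sudo env PATH=... kubeadm init phase <phase> --config ...
		cmdline := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		cmd := exec.Command("/bin/bash", "-c", cmdline)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
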
	I0109 00:10:35.707051  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:35.707153  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:33.771119  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.271823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.399336  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.893942  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.905685  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:38.408847  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.207669  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:36.708189  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.207300  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.259562  451943 api_server.go:72] duration metric: took 1.552509336s to wait for apiserver process to appear ...
	I0109 00:10:37.259602  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:37.259628  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:38.272478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.272571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:37.894659  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.393328  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.393530  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.260559  451943 api_server.go:269] stopped: https://192.168.72.81:8443/healthz: Get "https://192.168.72.81:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0109 00:10:42.260609  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.136163  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.136216  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.136236  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.196804  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.196846  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.260001  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.270495  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.270549  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:43.759989  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.813746  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.813787  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.260614  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.271111  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:44.271144  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.760496  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.771584  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:10:44.780881  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:10:44.780911  451943 api_server.go:131] duration metric: took 7.521300216s to wait for apiserver health ...
	I0109 00:10:44.780923  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:44.780933  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:44.783223  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
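
The healthz sequence above shows the expected progression while the apiserver restarts: 403 while anonymous access is still forbidden, 500 while poststarthooks (bootstrap-controller, rbac/bootstrap-roles, ca-registration) are still failing, and finally 200 "ok". A minimal Go sketch of that polling behaviour (assumed, not minikube's api_server.go; the URL is taken from the log and TLS verification is skipped only for illustration):

// healthz.go - poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster uses a self-signed CA; a real client would load
		// /var/lib/minikube/certs/ca.crt instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to the "returned 200: ok" line
			}
			// 403 (anonymous user) and 500 (poststarthooks still failing) mean
			// the apiserver is up but not yet healthy; keep polling.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.81:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
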
	I0109 00:10:40.906182  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:43.407169  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.784832  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:44.802495  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:44.821665  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:44.832420  451943 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:44.832452  451943 system_pods.go:61] "coredns-5644d7b6d9-5hqlw" [b6d5e87b-e72e-47bb-92b2-afecece262c5] Running
	I0109 00:10:44.832456  451943 system_pods.go:61] "coredns-5644d7b6d9-j4nnt" [d8995b4a-0ebf-406b-9937-09ba09591c78] Running
	I0109 00:10:44.832462  451943 system_pods.go:61] "etcd-old-k8s-version-003293" [8b9f9b32-dfe9-4cfe-856b-3aec43645e1e] Running
	I0109 00:10:44.832467  451943 system_pods.go:61] "kube-apiserver-old-k8s-version-003293" [48f5c692-7501-45ae-a53a-49e330129c36] Running
	I0109 00:10:44.832471  451943 system_pods.go:61] "kube-controller-manager-old-k8s-version-003293" [e458a3e9-ae8b-4ab7-bdc5-61b4321cca4a] Running
	I0109 00:10:44.832475  451943 system_pods.go:61] "kube-proxy-bc4tl" [74020495-07c6-441b-9b46-2f6a103d65eb] Running
	I0109 00:10:44.832478  451943 system_pods.go:61] "kube-scheduler-old-k8s-version-003293" [6a8e330c-f4bb-4bfd-b610-9071077fbb0f] Running
	I0109 00:10:44.832482  451943 system_pods.go:61] "storage-provisioner" [cbfd54c3-1952-4c0f-9272-29e2a8a4d5ed] Running
	I0109 00:10:44.832489  451943 system_pods.go:74] duration metric: took 10.801262ms to wait for pod list to return data ...
	I0109 00:10:44.832498  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:44.836130  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:44.836175  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:44.836196  451943 node_conditions.go:105] duration metric: took 3.685161ms to run NodePressure ...
	I0109 00:10:44.836220  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:45.117528  451943 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:45.121965  451943 retry.go:31] will retry after 324.075641ms: kubelet not initialised
	I0109 00:10:45.451702  451943 retry.go:31] will retry after 510.869227ms: kubelet not initialised
	I0109 00:10:42.770145  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.271625  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.394539  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:46.894669  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.910325  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:48.406435  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.969561  451943 retry.go:31] will retry after 435.571732ms: kubelet not initialised
	I0109 00:10:46.411948  451943 retry.go:31] will retry after 1.046618493s: kubelet not initialised
	I0109 00:10:47.471972  451943 retry.go:31] will retry after 1.328746031s: kubelet not initialised
	I0109 00:10:48.805606  451943 retry.go:31] will retry after 1.964166074s: kubelet not initialised
	I0109 00:10:50.776656  451943 retry.go:31] will retry after 2.966424358s: kubelet not initialised
	I0109 00:10:47.271965  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.773571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.393384  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:51.393857  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:50.905980  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:52.404441  452237 pod_ready.go:92] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.404467  452237 pod_ready.go:81] duration metric: took 43.007278698s waiting for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.404477  452237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409827  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.409851  452237 pod_ready.go:81] duration metric: took 5.368556ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409862  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415211  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.415233  452237 pod_ready.go:81] duration metric: took 5.363915ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415243  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420309  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.420329  452237 pod_ready.go:81] duration metric: took 5.078283ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420337  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425229  452237 pod_ready.go:92] pod "kube-proxy-kxjqj" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.425251  452237 pod_ready.go:81] duration metric: took 4.908776ms waiting for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425260  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.801958  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.801989  452237 pod_ready.go:81] duration metric: took 376.723222ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.802000  452237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:54.811346  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.748552  451943 retry.go:31] will retry after 3.201777002s: kubelet not initialised
	I0109 00:10:52.273938  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:54.771590  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.775438  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.422099  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:55.894657  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:57.310528  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:59.313642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.956459  451943 retry.go:31] will retry after 6.469663917s: kubelet not initialised
	I0109 00:10:59.272417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.272940  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:58.393999  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:00.893766  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.809942  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.309972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:03.432087  451943 retry.go:31] will retry after 13.730562228s: kubelet not initialised
	I0109 00:11:03.771273  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.268462  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:02.894171  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.894858  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:07.393254  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.310613  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.812051  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.270554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:10.272757  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:09.893982  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.894729  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.310615  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:13.311452  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:12.770003  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.770452  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.393106  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:16.394348  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:15.809972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.309870  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:17.168682  451943 retry.go:31] will retry after 14.832819941s: kubelet not initialised
	I0109 00:11:17.271266  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:19.271908  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.771727  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.394025  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:20.808968  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:22.810167  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.773732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:26.269527  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.394213  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.893851  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.310683  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:27.810354  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:29.814175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.271026  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.271149  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.393310  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.393582  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.310474  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.312045  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.007072  451943 kubeadm.go:787] kubelet initialised
	I0109 00:11:32.007097  451943 kubeadm.go:788] duration metric: took 46.889534921s waiting for restarted kubelet to initialise ...
	I0109 00:11:32.007109  451943 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:11:32.012969  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018937  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.018957  451943 pod_ready.go:81] duration metric: took 5.963591ms waiting for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018975  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028039  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.028067  451943 pod_ready.go:81] duration metric: took 9.084525ms waiting for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028078  451943 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032808  451943 pod_ready.go:92] pod "etcd-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.032832  451943 pod_ready.go:81] duration metric: took 4.746043ms waiting for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032843  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037435  451943 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.037466  451943 pod_ready.go:81] duration metric: took 4.610014ms waiting for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037478  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405716  451943 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.405742  451943 pod_ready.go:81] duration metric: took 368.257236ms waiting for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405760  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806721  451943 pod_ready.go:92] pod "kube-proxy-bc4tl" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.806747  451943 pod_ready.go:81] duration metric: took 400.981273ms waiting for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806756  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205810  451943 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:33.205840  451943 pod_ready.go:81] duration metric: took 399.074693ms waiting for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205855  451943 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:35.213679  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.271553  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.773998  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.893079  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:35.393616  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.393839  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:36.809214  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:38.809702  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.714222  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.213748  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.270073  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.270564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.771950  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.894200  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.895632  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.810676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:43.310394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:42.214955  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.713236  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.270745  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.769008  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.395323  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.893378  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:45.811067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.310292  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.713278  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:49.212583  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.769858  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.270380  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.894013  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.896386  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.311125  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:52.809499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:54.811339  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.213641  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.214157  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.711725  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.271867  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.771478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.393541  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.894575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.310953  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:59.809359  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.713429  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.215472  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.270445  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.770718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.393555  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:01.810389  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:04.311994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:02.713532  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.213545  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.270633  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.771349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.392243  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.393601  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:06.809758  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.310090  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.713345  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.713636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.774207  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:10.271536  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.892992  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.894465  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.394064  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.310240  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.311902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.713857  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.714968  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.770737  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.271471  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:14.893031  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.393146  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.312766  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.808902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:16.213122  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:18.215771  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.713269  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.772762  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.274611  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:19.399686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:21.895279  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.315434  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.809703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.813460  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:23.215054  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.216598  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.771192  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.271732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.392768  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:26.393642  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.309913  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.310558  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.713280  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.713388  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.771683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.269862  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:28.892939  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.894280  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:31.310860  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.313161  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.215375  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.713965  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.271111  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.770162  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.393271  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.393849  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.811747  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:38.311158  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.212773  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.712777  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.273180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.274403  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.770772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.893508  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.893834  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.394002  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:40.311402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.809836  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.714285  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.213161  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:43.772982  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.269879  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.893044  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.894333  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:45.310764  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:47.810622  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.213392  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.214029  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.712956  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.273388  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.772779  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:49.393068  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:51.894350  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.314344  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:52.809208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.809757  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.213473  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.213609  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.270014  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.270513  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.392981  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:56.896752  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.310923  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.809897  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.713409  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:00.213074  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.771956  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.772597  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.776736  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.392477  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.393047  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.810055  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.316038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:02.214227  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.714073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.271552  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.274081  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:03.394211  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:05.892722  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.808153  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.809658  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.213252  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:09.214016  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.771514  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.271265  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.893535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.394062  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.811210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.309480  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.713294  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.714070  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.274656  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.770363  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:12.892232  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:14.892967  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.893970  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.309955  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.310537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.312112  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.213649  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:18.712398  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:20.713447  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.770504  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.776344  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.391934  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.393412  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.809067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.811245  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.715248  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.215489  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.270417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:24.276304  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.771255  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.892801  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.395553  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.815479  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.309581  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:27.713470  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:29.713667  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.772564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.270216  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.892655  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.893557  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.310454  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.311950  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.809831  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.714418  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.213103  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:33.270895  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.772159  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.894686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.393366  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.810699  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.315029  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.217502  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:38.713073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.772491  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:40.269651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.894503  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.895994  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.393607  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.808659  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.809657  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.212704  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.713415  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.270157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.769816  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.770516  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.394641  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.895010  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.310425  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.310812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.213445  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.714493  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:49.270269  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.262625  451984 pod_ready.go:81] duration metric: took 4m0.000332739s waiting for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
	E0109 00:13:50.262665  451984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:13:50.262695  451984 pod_ready.go:38] duration metric: took 4m14.064299354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:13:50.262735  451984 kubeadm.go:640] restartCluster took 4m35.223413047s
	W0109 00:13:50.262837  451984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:13:50.262989  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:13:49.394039  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.893287  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.809875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.311275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.214302  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.215860  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.714407  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.893351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.895250  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.811061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:57.811763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.213089  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.214795  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.393252  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.394330  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.395864  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:03.952243  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.689217944s)
	I0109 00:14:03.952404  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:03.965852  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:14:03.975784  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:14:03.984599  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:14:03.984649  451984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:14:04.041116  451984 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:14:04.041179  451984 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:14:04.213643  451984 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:14:04.213797  451984 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:14:04.213932  451984 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:14:04.470597  451984 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:14:00.312213  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.813799  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.816592  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.472836  451984 out.go:204]   - Generating certificates and keys ...
	I0109 00:14:04.473031  451984 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:14:04.473115  451984 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:14:04.473210  451984 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:14:04.473272  451984 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:14:04.473376  451984 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:14:04.473804  451984 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:14:04.474373  451984 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:14:04.474832  451984 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:14:04.475386  451984 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:14:04.475875  451984 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:14:04.476290  451984 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:14:04.476378  451984 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:14:04.599856  451984 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:14:04.905946  451984 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:14:05.274703  451984 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:14:05.463087  451984 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:14:05.464020  451984 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:14:05.468993  451984 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:14:02.215257  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.714764  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:05.471038  451984 out.go:204]   - Booting up control plane ...
	I0109 00:14:05.471146  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:14:05.471245  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:14:05.471342  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:14:05.488208  451984 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:14:05.489177  451984 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:14:05.489282  451984 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:14:05.629700  451984 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:14:04.895593  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.396575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.310589  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.809734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.212902  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.214384  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.895351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:12.397437  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.633863  451984 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004133 seconds
	I0109 00:14:13.634067  451984 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:14:13.657224  451984 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:14:14.196593  451984 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:14:14.196798  451984 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-845373 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:14:14.715124  451984 kubeadm.go:322] [bootstrap-token] Using token: 0z1u86.ex8qfq3o12xtqu87
	I0109 00:14:14.716600  451984 out.go:204]   - Configuring RBAC rules ...
	I0109 00:14:14.716727  451984 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:14:14.724791  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:14:14.734361  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:14:14.742345  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:14:14.749616  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:14:14.753942  451984 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:14:14.774188  451984 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:14:15.042710  451984 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:14:15.131751  451984 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:14:15.132745  451984 kubeadm.go:322] 
	I0109 00:14:15.132804  451984 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:14:15.132810  451984 kubeadm.go:322] 
	I0109 00:14:15.132872  451984 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:14:15.132879  451984 kubeadm.go:322] 
	I0109 00:14:15.132898  451984 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:14:15.132959  451984 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:14:15.133067  451984 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:14:15.133094  451984 kubeadm.go:322] 
	I0109 00:14:15.133160  451984 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:14:15.133173  451984 kubeadm.go:322] 
	I0109 00:14:15.133229  451984 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:14:15.133241  451984 kubeadm.go:322] 
	I0109 00:14:15.133313  451984 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:14:15.133412  451984 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:14:15.133510  451984 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:14:15.133524  451984 kubeadm.go:322] 
	I0109 00:14:15.133644  451984 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:14:15.133761  451984 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:14:15.133777  451984 kubeadm.go:322] 
	I0109 00:14:15.133882  451984 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134003  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:14:15.134030  451984 kubeadm.go:322] 	--control-plane 
	I0109 00:14:15.134037  451984 kubeadm.go:322] 
	I0109 00:14:15.134137  451984 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:14:15.134145  451984 kubeadm.go:322] 
	I0109 00:14:15.134240  451984 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134415  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:14:15.135483  451984 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:14:15.135524  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:14:15.135536  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:14:15.137331  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:14:11.810358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.813252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:11.214971  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.713322  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.714895  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.138794  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:14:15.164722  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:14:15.236472  451984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:14:15.236536  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.236558  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=embed-certs-845373 minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.353564  451984 ops.go:34] apiserver oom_adj: -16
	I0109 00:14:15.675801  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.176590  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.676619  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.176120  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.893438  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.896780  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.311939  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.312023  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.213002  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.214958  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:17.676614  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.176469  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.676367  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.176646  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.676613  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.176615  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.676641  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.176075  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.676489  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:22.176784  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.395936  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:21.892353  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.810687  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.810879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.713569  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:25.213852  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.676054  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.176662  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.676911  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.175927  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.676685  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.176625  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.676281  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.176650  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.675943  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.176834  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.894745  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:26.394535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.676594  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.846642  451984 kubeadm.go:1088] duration metric: took 12.610179243s to wait for elevateKubeSystemPrivileges.
	I0109 00:14:27.846694  451984 kubeadm.go:406] StartCluster complete in 5m12.860674926s
	I0109 00:14:27.846775  451984 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.846922  451984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:14:27.849568  451984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.849886  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:14:27.850039  451984 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:14:27.850143  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:14:27.850168  451984 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845373"
	I0109 00:14:27.850185  451984 addons.go:69] Setting metrics-server=true in profile "embed-certs-845373"
	I0109 00:14:27.850196  451984 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-845373"
	W0109 00:14:27.850206  451984 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:14:27.850209  451984 addons.go:237] Setting addon metrics-server=true in "embed-certs-845373"
	W0109 00:14:27.850226  451984 addons.go:246] addon metrics-server should already be in state true
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850780  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850804  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850886  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850916  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850174  451984 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845373"
	I0109 00:14:27.850983  451984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845373"
	I0109 00:14:27.851436  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.851473  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.869118  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0109 00:14:27.869634  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.870272  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.870301  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.870793  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.870883  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0109 00:14:27.871047  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0109 00:14:27.871320  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871380  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871694  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.871740  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.871880  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871910  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.871917  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871934  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.872311  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872318  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872472  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.872864  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.872907  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.875833  451984 addons.go:237] Setting addon default-storageclass=true in "embed-certs-845373"
	W0109 00:14:27.875851  451984 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:14:27.875874  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.876143  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.876172  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0109 00:14:27.892642  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0109 00:14:27.893165  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893218  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893382  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893725  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893751  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.893889  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893906  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894287  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894344  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894351  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.894366  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894531  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.894905  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894920  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.894955  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.895325  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.897315  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.897565  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.899343  451984 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:14:27.901058  451984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:14:27.903097  451984 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:27.903113  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:14:27.903129  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.901085  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:14:27.903182  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:14:27.903190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.907703  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908100  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908474  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908505  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908744  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908765  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908869  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.908924  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.909079  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909118  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909274  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909303  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909444  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.909660  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.913404  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0109 00:14:27.913992  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.914388  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.914409  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.914831  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.915055  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.916650  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.916872  451984 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:27.916891  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:14:27.916911  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.919557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.919945  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.919962  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.920188  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.920346  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.920520  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.920627  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:28.169436  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:14:28.180527  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:28.194004  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:14:28.194025  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:14:28.216619  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:28.258292  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:14:28.258321  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:14:28.320624  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:28.320652  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:14:28.355471  451984 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-845373" context rescaled to 1 replicas
	I0109 00:14:28.355514  451984 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:14:28.357573  451984 out.go:177] * Verifying Kubernetes components...
	I0109 00:14:25.309676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.312462  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:29.810262  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:28.359075  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:28.379542  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:30.061115  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.891626144s)
	I0109 00:14:30.061149  451984 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
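(Annotation: the sed pipeline above injects a hosts block mapping 192.168.50.1 to host.minikube.internal into the CoreDNS ConfigMap. A small, hypothetical verification helper — using plain kubectl rather than minikube's internal ssh_runner — could confirm the record landed:)

// check_coredns_hosts.go — illustrative check, assumes kubectl points at this cluster.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system",
		"get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), "host.minikube.internal") {
		fmt.Println("host record present in CoreDNS ConfigMap")
	} else {
		fmt.Println("host record missing")
	}
}
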
	I0109 00:14:30.452861  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.236197297s)
	I0109 00:14:30.452929  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.452943  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.452943  451984 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.09383281s)
	I0109 00:14:30.453122  451984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.453131  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.272573904s)
	I0109 00:14:30.453293  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453306  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453320  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453311  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453332  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453342  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453674  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453693  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453700  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453708  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453740  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453752  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453764  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453784  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.454074  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.454093  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.454107  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.457209  451984 node_ready.go:49] node "embed-certs-845373" has status "Ready":"True"
	I0109 00:14:30.457229  451984 node_ready.go:38] duration metric: took 4.077361ms waiting for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.457238  451984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
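(Annotation: the line above declares the extra wait for all system-critical pods carrying the listed labels. The same kind of readiness gate can be expressed with kubectl wait; this is a sketch under that assumption, not the pod_ready helper minikube actually uses.)

// wait_system_pods.go — illustrative readiness gate over the labels named in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	labels := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
			"--for=condition=Ready", "pod", "-l", l, "--timeout=6m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("pods with label %s not Ready: %v", l, err)
		}
	}
}
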
	I0109 00:14:30.488244  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.488275  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.488609  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.488634  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.488660  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.489887  451984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:30.508615  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.129028413s)
	I0109 00:14:30.508663  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.508677  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.508966  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.509058  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509152  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509175  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.509190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.509535  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509564  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509578  451984 addons.go:473] Verifying addon metrics-server=true in "embed-certs-845373"
	I0109 00:14:30.509582  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.511636  451984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:14:27.714663  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.213049  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.513246  451984 addons.go:508] enable addons completed in 2.663216413s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:14:31.999091  451984 pod_ready.go:92] pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:31.999122  451984 pod_ready.go:81] duration metric: took 1.509214799s waiting for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:31.999131  451984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005047  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.005077  451984 pod_ready.go:81] duration metric: took 5.937291ms waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005091  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011823  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.011853  451984 pod_ready.go:81] duration metric: took 6.752071ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011866  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017760  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.017782  451984 pod_ready.go:81] duration metric: took 5.908986ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017792  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058063  451984 pod_ready.go:92] pod "kube-proxy-nxtn2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.058094  451984 pod_ready.go:81] duration metric: took 40.295825ms waiting for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058104  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:28.397781  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.894153  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:31.394151  452488 pod_ready.go:81] duration metric: took 4m0.008881128s waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:31.394180  452488 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:14:31.394191  452488 pod_ready.go:38] duration metric: took 4m0.808517944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:31.394210  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:14:31.394307  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:31.394397  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:31.457897  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:31.457929  452488 cri.go:89] found id: ""
	I0109 00:14:31.457941  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:31.458002  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.463534  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:31.463632  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:31.524249  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:31.524284  452488 cri.go:89] found id: ""
	I0109 00:14:31.524296  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:31.524363  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.529188  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:31.529260  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:31.583505  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:31.583543  452488 cri.go:89] found id: ""
	I0109 00:14:31.583554  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:31.583618  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.589373  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:31.589466  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:31.639895  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:31.639931  452488 cri.go:89] found id: ""
	I0109 00:14:31.639942  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:31.640016  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.644881  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:31.644952  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:31.686002  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.686031  452488 cri.go:89] found id: ""
	I0109 00:14:31.686047  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:31.686114  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.691664  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:31.691754  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:31.745729  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:31.745757  452488 cri.go:89] found id: ""
	I0109 00:14:31.745766  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:31.745829  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.751116  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:31.751192  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:31.794856  452488 cri.go:89] found id: ""
	I0109 00:14:31.794890  452488 logs.go:284] 0 containers: []
	W0109 00:14:31.794901  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:31.794909  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:31.794976  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:31.840973  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:31.840999  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:31.841006  452488 cri.go:89] found id: ""
	I0109 00:14:31.841014  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:31.841084  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.845852  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.850824  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:31.850851  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:31.914344  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:31.914404  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.958899  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:31.958934  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:32.021319  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:32.021353  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:32.074995  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:32.075034  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:32.089535  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:32.089572  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:32.244418  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:32.244460  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:32.288116  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:32.288161  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:32.332939  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:32.332980  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:32.378455  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:32.378487  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:32.437376  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:32.437421  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
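(Annotation: the "Gathering logs for ..." lines above follow a fixed pattern — resolve a container ID with crictl, then dump its last 400 log lines. A minimal sketch of that pattern, using only the crictl invocations that appear in the log, with the container name as a parameter:)

// gather_crio_logs.go — illustrative; the report itself iterates over every control-plane component.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	name := "kube-apiserver"
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(ids)) {
		fmt.Printf("=== %s [%s] ===\n", name, id)
		cmd := exec.Command("sudo", "crictl", "logs", "--tail", "400", id)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Printf("crictl logs failed for %s: %v", id, err)
		}
	}
}
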
	I0109 00:14:31.813208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.311338  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.215522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.712223  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.460309  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.460343  451984 pod_ready.go:81] duration metric: took 402.230769ms waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.460358  451984 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:34.470103  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.470854  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.911300  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:32.911345  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:32.959902  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:32.959942  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.500402  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:14:35.516569  452488 api_server.go:72] duration metric: took 4m10.712558057s to wait for apiserver process to appear ...
	I0109 00:14:35.516600  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:14:35.516640  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:35.516690  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:35.559395  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:35.559421  452488 cri.go:89] found id: ""
	I0109 00:14:35.559429  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:35.559497  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.564381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:35.564468  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:35.604963  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:35.604991  452488 cri.go:89] found id: ""
	I0109 00:14:35.605004  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:35.605074  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.610352  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:35.610412  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:35.655316  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.655353  452488 cri.go:89] found id: ""
	I0109 00:14:35.655381  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:35.655471  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.660932  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:35.661015  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:35.702201  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:35.702228  452488 cri.go:89] found id: ""
	I0109 00:14:35.702237  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:35.702297  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.707544  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:35.707615  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:35.755445  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:35.755478  452488 cri.go:89] found id: ""
	I0109 00:14:35.755489  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:35.755555  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.760393  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:35.760470  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:35.813641  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:35.813672  452488 cri.go:89] found id: ""
	I0109 00:14:35.813682  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:35.813749  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.819342  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:35.819495  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:35.861693  452488 cri.go:89] found id: ""
	I0109 00:14:35.861723  452488 logs.go:284] 0 containers: []
	W0109 00:14:35.861732  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:35.861740  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:35.861807  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:35.900886  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:35.900931  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:35.900937  452488 cri.go:89] found id: ""
	I0109 00:14:35.900945  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:35.901005  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.905463  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.910271  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:35.910300  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:36.056761  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:36.056798  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:36.096707  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:36.096739  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:36.555891  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:36.555936  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:36.573167  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:36.573196  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:36.622139  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:36.622169  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:36.680395  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:36.680435  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:36.740350  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:36.740389  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:36.779409  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:36.779443  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:36.837425  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:36.837474  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:36.892724  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:36.892763  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:36.939944  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:36.939979  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:36.999567  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:36.999612  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:36.810729  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.810924  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.713630  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.213516  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.970746  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.468803  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.546015  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:14:39.551932  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:14:39.553444  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:14:39.553469  452488 api_server.go:131] duration metric: took 4.036861283s to wait for apiserver health ...
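(Annotation: the healthz check above polls https://192.168.39.73:8444/healthz until it returns 200 "ok". A stand-alone sketch of such a probe is below; it skips TLS verification for brevity, whereas minikube authenticates with the cluster's client certificates, so treat this as illustration only.)

// healthz_probe.go — illustrative apiserver health poll against the endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.73:8444/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthz: ok")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
}
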
	I0109 00:14:39.553480  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:14:39.553512  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:39.553592  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:39.597338  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:39.597368  452488 cri.go:89] found id: ""
	I0109 00:14:39.597381  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:39.597450  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.602381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:39.602473  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:39.643738  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:39.643776  452488 cri.go:89] found id: ""
	I0109 00:14:39.643787  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:39.643854  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.649021  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:39.649096  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:39.692903  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:39.692926  452488 cri.go:89] found id: ""
	I0109 00:14:39.692934  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:39.692992  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.697806  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:39.697882  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:39.746679  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:39.746706  452488 cri.go:89] found id: ""
	I0109 00:14:39.746716  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:39.746765  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.752396  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:39.752459  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:39.800438  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:39.800461  452488 cri.go:89] found id: ""
	I0109 00:14:39.800470  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:39.800535  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.805644  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:39.805737  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:39.847341  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:39.847387  452488 cri.go:89] found id: ""
	I0109 00:14:39.847398  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:39.847465  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.851972  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:39.852053  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:39.899183  452488 cri.go:89] found id: ""
	I0109 00:14:39.899219  452488 logs.go:284] 0 containers: []
	W0109 00:14:39.899231  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:39.899239  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:39.899309  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:39.958353  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:39.958395  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:39.958400  452488 cri.go:89] found id: ""
	I0109 00:14:39.958409  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:39.958469  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.963264  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.968827  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:39.968858  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:40.015655  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:40.015685  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:40.161910  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:40.161944  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:40.200197  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:40.200233  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:40.244075  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:40.244119  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:40.655095  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:40.655160  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:40.711957  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:40.712004  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:40.765456  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:40.765503  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:40.824273  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:40.824320  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:40.887213  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:40.887252  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:40.925809  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:40.925842  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:40.967599  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:40.967635  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:41.021163  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:41.021219  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:43.543901  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:14:43.543933  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.543938  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.543943  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.543947  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.543951  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.543955  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.543962  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.543966  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.543974  452488 system_pods.go:74] duration metric: took 3.990487712s to wait for pod list to return data ...
	I0109 00:14:43.543982  452488 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:14:43.547032  452488 default_sa.go:45] found service account: "default"
	I0109 00:14:43.547063  452488 default_sa.go:55] duration metric: took 3.07377ms for default service account to be created ...
	I0109 00:14:43.547075  452488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:14:43.554265  452488 system_pods.go:86] 8 kube-system pods found
	I0109 00:14:43.554305  452488 system_pods.go:89] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.554314  452488 system_pods.go:89] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.554322  452488 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.554329  452488 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.554336  452488 system_pods.go:89] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.554343  452488 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.554356  452488 system_pods.go:89] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.554397  452488 system_pods.go:89] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.554420  452488 system_pods.go:126] duration metric: took 7.336546ms to wait for k8s-apps to be running ...
	I0109 00:14:43.554431  452488 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:14:43.554494  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:43.570839  452488 system_svc.go:56] duration metric: took 16.394034ms WaitForService to wait for kubelet.
	I0109 00:14:43.570874  452488 kubeadm.go:581] duration metric: took 4m18.766870325s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:14:43.570904  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:14:43.575087  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:14:43.575115  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:14:43.575127  452488 node_conditions.go:105] duration metric: took 4.218446ms to run NodePressure ...
	I0109 00:14:43.575139  452488 start.go:228] waiting for startup goroutines ...
	I0109 00:14:43.575145  452488 start.go:233] waiting for cluster config update ...
	I0109 00:14:43.575154  452488 start.go:242] writing updated cluster config ...
	I0109 00:14:43.575452  452488 ssh_runner.go:195] Run: rm -f paused
	I0109 00:14:43.636407  452488 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:14:43.638597  452488 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-834116" cluster and "default" namespace by default
	I0109 00:14:40.814426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.310989  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.214186  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.714118  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.968087  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.968943  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.809788  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:47.810189  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:46.213897  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.714327  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.716636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.472384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.473405  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.310188  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.311048  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.803108  452237 pod_ready.go:81] duration metric: took 4m0.001087466s waiting for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:52.803148  452237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:14:52.803179  452237 pod_ready.go:38] duration metric: took 4m43.413410939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:52.803217  452237 kubeadm.go:640] restartCluster took 5m4.419560589s
	W0109 00:14:52.803342  452237 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:14:52.803433  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:14:53.213308  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.215229  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.972718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.470546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.714170  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:00.213742  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.968558  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:59.969971  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:01.970573  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:02.713539  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:05.213339  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:04.470909  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:06.976278  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:07.153986  452237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.350512063s)
	I0109 00:15:07.154091  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:07.169206  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:07.180120  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:07.190689  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:07.190746  452237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:15:07.249723  452237 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:15:07.249803  452237 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:07.413454  452237 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:07.413648  452237 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:07.413809  452237 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:07.666677  452237 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:07.668620  452237 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:07.668736  452237 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:07.668869  452237 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:07.669044  452237 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:07.669122  452237 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:07.669206  452237 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:07.669265  452237 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:07.669338  452237 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:07.669409  452237 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:07.669493  452237 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:07.669587  452237 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:07.669632  452237 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:07.669698  452237 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:07.892774  452237 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:08.387341  452237 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:15:08.697850  452237 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:09.110380  452237 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:09.182970  452237 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:09.183625  452237 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:09.186350  452237 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:09.188402  452237 out.go:204]   - Booting up control plane ...
	I0109 00:15:09.188494  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:09.188620  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:09.190877  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:09.210069  452237 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:09.213806  452237 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:09.214168  452237 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:15:09.348180  452237 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:07.713522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:10.212932  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:09.468413  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:11.472366  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:12.214158  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:14.713831  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:13.968332  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:15.970174  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:17.853084  452237 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502974 seconds
	I0109 00:15:17.871025  452237 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:17.897430  452237 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:18.444483  452237 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:18.444785  452237 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-378213 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:15:18.959611  452237 kubeadm.go:322] [bootstrap-token] Using token: dhjf8u.939ptni0q22ypfw8
	I0109 00:15:18.961445  452237 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:18.961621  452237 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:18.976769  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:15:18.986315  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:18.991512  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:18.996317  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:19.001219  452237 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:19.018739  452237 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:15:19.300703  452237 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:19.384320  452237 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:19.385524  452237 kubeadm.go:322] 
	I0109 00:15:19.385609  452237 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:19.385646  452237 kubeadm.go:322] 
	I0109 00:15:19.385746  452237 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:19.385759  452237 kubeadm.go:322] 
	I0109 00:15:19.385780  452237 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:19.385851  452237 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:19.385894  452237 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:19.385902  452237 kubeadm.go:322] 
	I0109 00:15:19.385976  452237 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:15:19.385984  452237 kubeadm.go:322] 
	I0109 00:15:19.386052  452237 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:15:19.386063  452237 kubeadm.go:322] 
	I0109 00:15:19.386140  452237 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:19.386255  452237 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:19.386338  452237 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:19.386348  452237 kubeadm.go:322] 
	I0109 00:15:19.386445  452237 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:15:19.386563  452237 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:19.386588  452237 kubeadm.go:322] 
	I0109 00:15:19.386704  452237 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.386865  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:19.386893  452237 kubeadm.go:322] 	--control-plane 
	I0109 00:15:19.386900  452237 kubeadm.go:322] 
	I0109 00:15:19.387013  452237 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:19.387023  452237 kubeadm.go:322] 
	I0109 00:15:19.387156  452237 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.387306  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:19.388274  452237 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:19.388386  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:15:19.388404  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:19.390641  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:19.392729  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:19.420375  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:19.480953  452237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:19.481036  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.481070  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=no-preload-378213 minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.529444  452237 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:19.828947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:17.214395  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:19.714562  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:18.467657  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.469306  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.329278  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:20.829730  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.329756  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.829370  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.329549  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.829161  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.329937  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.829891  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.329077  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.829276  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.715433  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.214554  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:22.469602  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.968838  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:25.329025  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:25.829279  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.329947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.829794  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.329030  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.829080  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.329613  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.829372  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.329826  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.829063  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.712393  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:28.715010  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:30.329991  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:30.829320  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.329115  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.423331  452237 kubeadm.go:1088] duration metric: took 11.942366757s to wait for elevateKubeSystemPrivileges.
	I0109 00:15:31.423377  452237 kubeadm.go:406] StartCluster complete in 5m43.086225729s
	I0109 00:15:31.423405  452237 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.423510  452237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:15:31.425917  452237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.426178  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:15:31.426284  452237 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:15:31.426369  452237 addons.go:69] Setting storage-provisioner=true in profile "no-preload-378213"
	I0109 00:15:31.426384  452237 addons.go:69] Setting default-storageclass=true in profile "no-preload-378213"
	I0109 00:15:31.426397  452237 addons.go:237] Setting addon storage-provisioner=true in "no-preload-378213"
	W0109 00:15:31.426409  452237 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:15:31.426432  452237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-378213"
	I0109 00:15:31.426447  452237 addons.go:69] Setting metrics-server=true in profile "no-preload-378213"
	I0109 00:15:31.426476  452237 addons.go:237] Setting addon metrics-server=true in "no-preload-378213"
	W0109 00:15:31.426484  452237 addons.go:246] addon metrics-server should already be in state true
	I0109 00:15:31.426485  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426540  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426434  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:15:31.426891  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426918  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426927  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426931  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.446291  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0109 00:15:31.446423  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0109 00:15:31.446819  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0109 00:15:31.447018  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.447639  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.447724  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447854  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.448095  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448259  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448288  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448354  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.448439  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448465  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448921  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448997  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.449699  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449744  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.449757  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449785  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.452784  452237 addons.go:237] Setting addon default-storageclass=true in "no-preload-378213"
	W0109 00:15:31.452809  452237 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:15:31.452841  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.454376  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.454416  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.467638  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0109 00:15:31.468325  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.468901  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.468921  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.469339  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.469563  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.471409  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.473329  452237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:15:31.474680  452237 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.474693  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:15:31.474706  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.473604  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0109 00:15:31.474062  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0109 00:15:31.475095  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475399  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.475627  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.475979  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.476163  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.477959  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.479656  452237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:15:31.478629  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.479280  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.479557  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.480974  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.481058  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:15:31.481066  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:15:31.481079  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.481110  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.481128  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.481308  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.481878  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.482384  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.483085  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.483645  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.483668  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.484708  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485095  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.485117  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485318  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.487608  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.487807  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.487999  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.499347  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0109 00:15:31.499913  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.500547  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.500570  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.500917  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.501145  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.503016  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.503296  452237 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.503310  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:15:31.503325  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.506091  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506397  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.506455  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506652  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.506831  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.506978  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.507091  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.624782  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:15:31.642826  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.663296  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.710300  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:15:31.710330  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:15:31.787478  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:15:31.787517  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:15:31.871349  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:31.871407  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:15:31.968192  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:32.072474  452237 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-378213" context rescaled to 1 replicas
	I0109 00:15:32.072532  452237 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:15:32.074625  452237 out.go:177] * Verifying Kubernetes components...
	I0109 00:15:27.468923  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:29.971742  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:32.075944  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:32.439632  452237 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0109 00:15:32.439722  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.439751  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440089  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440193  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.440209  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.440219  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440166  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440559  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440571  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440580  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.497313  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.497346  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.497717  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.497747  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901192  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.237846158s)
	I0109 00:15:32.901262  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901276  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901654  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.901703  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901719  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901730  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901662  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902029  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902069  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.902079  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030220  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.061947007s)
	I0109 00:15:33.030237  452237 node_ready.go:35] waiting up to 6m0s for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.030290  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030308  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.030694  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.030714  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030725  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030734  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.031003  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.031022  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.031034  452237 addons.go:473] Verifying addon metrics-server=true in "no-preload-378213"
	I0109 00:15:33.032849  452237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:15:33.034106  452237 addons.go:508] enable addons completed in 1.60782305s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:15:33.044548  452237 node_ready.go:49] node "no-preload-378213" has status "Ready":"True"
	I0109 00:15:33.044577  452237 node_ready.go:38] duration metric: took 14.31045ms waiting for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.044592  452237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.060577  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:34.066536  452237 pod_ready.go:97] error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066570  452237 pod_ready.go:81] duration metric: took 1.005962139s waiting for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:34.066584  452237 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066594  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:31.213050  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:33.206836  451943 pod_ready.go:81] duration metric: took 4m0.000952779s waiting for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:33.206864  451943 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:15:33.206884  451943 pod_ready.go:38] duration metric: took 4m1.199765303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.206916  451943 kubeadm.go:640] restartCluster took 5m9.054273444s
	W0109 00:15:33.206995  451943 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:15:33.207029  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:15:32.469904  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:34.969702  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:36.074768  452237 pod_ready.go:92] pod "coredns-76f75df574-ztvgr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.074793  452237 pod_ready.go:81] duration metric: took 2.008191718s waiting for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.074803  452237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080586  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.080610  452237 pod_ready.go:81] duration metric: took 5.80009ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080623  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.085972  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.085995  452237 pod_ready.go:81] duration metric: took 5.365045ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.086004  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091275  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.091295  452237 pod_ready.go:81] duration metric: took 5.284302ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091306  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095919  452237 pod_ready.go:92] pod "kube-proxy-4vnf5" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.095938  452237 pod_ready.go:81] duration metric: took 4.624685ms waiting for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095949  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471021  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.471051  452237 pod_ready.go:81] duration metric: took 375.093915ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471066  452237 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:38.478891  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.932714  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.725641704s)
	I0109 00:15:39.932824  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:39.949655  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:39.967317  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:39.983553  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:39.983602  451943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0109 00:15:40.196509  451943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:37.468440  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.468561  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:41.468728  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:40.481038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:42.979928  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:43.468928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.968791  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.479525  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.981785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:49.988192  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.970158  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:50.469209  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.798385  451943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0109 00:15:53.798458  451943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:53.798557  451943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:53.798719  451943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:53.798863  451943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:53.799001  451943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:53.799122  451943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:53.799199  451943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0109 00:15:53.799296  451943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:53.800918  451943 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:53.801030  451943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:53.801108  451943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:53.801199  451943 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:53.801284  451943 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:53.801342  451943 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:53.801386  451943 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:53.801441  451943 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:53.801491  451943 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:53.801563  451943 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:53.801654  451943 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:53.801710  451943 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:53.801776  451943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:53.801841  451943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:53.801885  451943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:53.801935  451943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:53.802013  451943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:53.802097  451943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:53.803572  451943 out.go:204]   - Booting up control plane ...
	I0109 00:15:53.803682  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:53.803757  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:53.803811  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:53.803932  451943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:53.804150  451943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:53.804251  451943 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506007 seconds
	I0109 00:15:53.804388  451943 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:53.804541  451943 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:53.804628  451943 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:53.804832  451943 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-003293 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0109 00:15:53.804900  451943 kubeadm.go:322] [bootstrap-token] Using token: 4iop3a.ft6ghwlgcg45v9u4
	I0109 00:15:53.806501  451943 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:53.806592  451943 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:53.806724  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:53.806832  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:53.806959  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:53.807033  451943 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:53.807071  451943 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:53.807109  451943 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:53.807115  451943 kubeadm.go:322] 
	I0109 00:15:53.807175  451943 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:53.807199  451943 kubeadm.go:322] 
	I0109 00:15:53.807319  451943 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:53.807328  451943 kubeadm.go:322] 
	I0109 00:15:53.807353  451943 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:53.807457  451943 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:53.807531  451943 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:53.807541  451943 kubeadm.go:322] 
	I0109 00:15:53.807594  451943 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:53.807668  451943 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:53.807746  451943 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:53.807766  451943 kubeadm.go:322] 
	I0109 00:15:53.807884  451943 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0109 00:15:53.807989  451943 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:53.807998  451943 kubeadm.go:322] 
	I0109 00:15:53.808083  451943 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808215  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:53.808267  451943 kubeadm.go:322]     --control-plane 	  
	I0109 00:15:53.808282  451943 kubeadm.go:322] 
	I0109 00:15:53.808416  451943 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:53.808431  451943 kubeadm.go:322] 
	I0109 00:15:53.808535  451943 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808635  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:53.808646  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:15:53.808655  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:53.810445  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:52.478401  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.478468  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.812384  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:53.822034  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:53.841918  451943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:53.842007  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.842023  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=old-k8s-version-003293 minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.878580  451943 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:54.119184  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:54.619596  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.119468  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.619508  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:52.969233  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.969384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.969570  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.978217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:59.478428  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.119299  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:56.620179  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.119526  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.619985  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.119330  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.619572  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.120142  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.619498  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.119329  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.620206  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.468767  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.969313  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.978314  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:03.979583  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.120279  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:01.619668  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.620169  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.120249  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.619563  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.619912  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.120243  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.620114  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.971649  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.468683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:05.980829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:08.479315  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.119938  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:06.619543  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.119220  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.619392  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.119991  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.619517  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.120205  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.620121  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.119909  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.273872  451943 kubeadm.go:1088] duration metric: took 16.431936842s to wait for elevateKubeSystemPrivileges.
	I0109 00:16:10.273910  451943 kubeadm.go:406] StartCluster complete in 5m46.185018744s
	I0109 00:16:10.273961  451943 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.274054  451943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:16:10.275851  451943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.276124  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:16:10.276261  451943 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:16:10.276362  451943 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276373  451943 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276388  451943 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-003293"
	I0109 00:16:10.276394  451943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-003293"
	I0109 00:16:10.276390  451943 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276415  451943 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-003293"
	W0109 00:16:10.276428  451943 addons.go:246] addon metrics-server should already be in state true
	I0109 00:16:10.276454  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:16:10.276481  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	W0109 00:16:10.276397  451943 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:16:10.276544  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.276864  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276880  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276867  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276941  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.276955  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.277062  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.294099  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0109 00:16:10.294268  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0109 00:16:10.294410  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0109 00:16:10.294718  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294768  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294925  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.295279  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295305  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295388  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295419  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295397  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295480  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295693  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295769  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295788  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.296012  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.296310  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.296357  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.297119  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.297171  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.299887  451943 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-003293"
	W0109 00:16:10.299910  451943 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:16:10.299946  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.300224  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.300263  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.313007  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0109 00:16:10.313533  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.314010  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.314026  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.314437  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.314622  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.315598  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0109 00:16:10.316247  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.316532  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.318734  451943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:16:10.317343  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.317379  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0109 00:16:10.320285  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:16:10.320308  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:16:10.320329  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.320333  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.320705  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.320963  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.321103  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.321233  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.321247  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.321761  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.322210  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.322242  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.323835  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.324029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.324152  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.324177  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.326057  451943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:16:10.324406  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.328066  451943 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.328087  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:16:10.328096  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.328124  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.328784  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.329014  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.331395  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.331785  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.331810  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.332001  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.332191  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.332335  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.332480  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.347123  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0109 00:16:10.347716  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.348691  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.348719  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.349127  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.349342  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.350834  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.351133  451943 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.351149  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:16:10.351168  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.354189  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354621  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.354668  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354909  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.355119  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.355294  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.355481  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.515777  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.534034  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:16:10.534064  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:16:10.554850  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.584934  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:16:10.584964  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:16:10.615671  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:16:10.637303  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.637339  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:16:10.680679  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.830403  451943 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-003293" context rescaled to 1 replicas
	I0109 00:16:10.830449  451943 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:16:10.832633  451943 out.go:177] * Verifying Kubernetes components...
	I0109 00:16:10.834172  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:16:11.515705  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.515738  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516087  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.516123  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516132  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.516141  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.516151  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516389  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516407  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.571488  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.571524  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.571880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.571890  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.571911  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630216  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075317719s)
	I0109 00:16:11.630282  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630297  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.630308  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.014587881s)
	I0109 00:16:11.630345  451943 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0109 00:16:11.630710  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.630729  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630740  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.630751  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.631004  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.631032  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.631153  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.716276  451943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.716463  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0357366s)
	I0109 00:16:11.716513  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716534  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.716848  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.716869  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.716878  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716889  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.717212  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.717222  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.717228  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.717245  451943 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-003293"
	I0109 00:16:11.719193  451943 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:16:08.968622  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.470234  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:10.479812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:12.984384  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.720570  451943 addons.go:508] enable addons completed in 1.44432074s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:16:11.733736  451943 node_ready.go:49] node "old-k8s-version-003293" has status "Ready":"True"
	I0109 00:16:11.733767  451943 node_ready.go:38] duration metric: took 17.451191ms waiting for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.733787  451943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:11.750301  451943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:13.762510  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:13.969774  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.468912  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:15.481249  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:17.978744  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:19.979938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.257523  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.259142  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.757454  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.469229  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.469761  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:22.478368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.978345  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:21.256765  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.256797  451943 pod_ready.go:81] duration metric: took 9.506455286s waiting for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.256807  451943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262633  451943 pod_ready.go:92] pod "kube-proxy-h8br2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.262651  451943 pod_ready.go:81] duration metric: took 5.836717ms waiting for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262660  451943 pod_ready.go:38] duration metric: took 9.52886361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:21.262697  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:16:21.262758  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:16:21.280249  451943 api_server.go:72] duration metric: took 10.449767566s to wait for apiserver process to appear ...
	I0109 00:16:21.280282  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:16:21.280305  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:16:21.286759  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:16:21.287885  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:16:21.287913  451943 api_server.go:131] duration metric: took 7.622726ms to wait for apiserver health ...
	I0109 00:16:21.287924  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:16:21.292745  451943 system_pods.go:59] 4 kube-system pods found
	I0109 00:16:21.292774  451943 system_pods.go:61] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.292782  451943 system_pods.go:61] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.292792  451943 system_pods.go:61] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.292799  451943 system_pods.go:61] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.292809  451943 system_pods.go:74] duration metric: took 4.87707ms to wait for pod list to return data ...
	I0109 00:16:21.292817  451943 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:16:21.295463  451943 default_sa.go:45] found service account: "default"
	I0109 00:16:21.295486  451943 default_sa.go:55] duration metric: took 2.661749ms for default service account to be created ...
	I0109 00:16:21.295495  451943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:16:21.299334  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.299369  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.299379  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.299389  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.299401  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.299419  451943 retry.go:31] will retry after 262.555966ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.567416  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.567444  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.567449  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.567456  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.567461  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.567483  451943 retry.go:31] will retry after 296.862413ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.869873  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.869910  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.869919  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.869932  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.869939  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.869960  451943 retry.go:31] will retry after 354.537219ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.229945  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.229973  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.229978  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.229985  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.229990  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.230008  451943 retry.go:31] will retry after 403.317754ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.639068  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.639100  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.639106  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.639115  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.639122  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.639145  451943 retry.go:31] will retry after 548.96975ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:23.193832  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:23.193865  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:23.193874  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:23.193884  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:23.193891  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:23.193912  451943 retry.go:31] will retry after 808.39734ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:24.007761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:24.007789  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:24.007794  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:24.007800  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:24.007805  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:24.007826  451943 retry.go:31] will retry after 1.084893616s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:25.097415  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:25.097446  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:25.097452  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:25.097461  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:25.097468  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:25.097488  451943 retry.go:31] will retry after 1.364718688s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.471347  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.968309  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.968540  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.981321  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:28.981763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.469277  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:26.469302  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:26.469308  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:26.469314  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:26.469319  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:26.469336  451943 retry.go:31] will retry after 1.608197445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.083522  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:28.083549  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:28.083554  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:28.083561  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:28.083566  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:28.083584  451943 retry.go:31] will retry after 1.803084046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:29.892783  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:29.892825  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:29.892834  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:29.892845  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:29.892852  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:29.892878  451943 retry.go:31] will retry after 2.500544298s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.970772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:30.972069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:31.478822  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:33.481537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:32.406761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:32.406791  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:32.406796  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:32.406803  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:32.406808  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:32.406826  451943 retry.go:31] will retry after 3.245901502s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:35.657591  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:35.657630  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:35.657636  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:35.657644  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:35.657650  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:35.657669  451943 retry.go:31] will retry after 2.987638992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:33.468927  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.968669  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.979914  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:37.982358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:38.652562  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:38.652589  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:38.652594  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:38.652600  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:38.652605  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:38.652621  451943 retry.go:31] will retry after 5.12035072s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:38.469167  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.469783  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.481402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:42.980559  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:43.778329  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:43.778358  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:43.778363  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:43.778370  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:43.778375  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:43.778392  451943 retry.go:31] will retry after 5.3812896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:42.972242  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.468157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.479217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:47.978368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.978994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.165092  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:49.165124  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:49.165129  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:49.165136  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:49.165142  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:49.165161  451943 retry.go:31] will retry after 8.788078847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:47.469586  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.968667  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.969102  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.979785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:53.984069  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:54.467285  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.469141  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.478629  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:58.479207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:57.958448  451943 system_pods.go:86] 5 kube-system pods found
	I0109 00:16:57.958475  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:57.958481  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Pending
	I0109 00:16:57.958485  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:57.958492  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:57.958497  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:57.958515  451943 retry.go:31] will retry after 8.563711001s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:58.470664  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.970608  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.481608  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:02.978829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:03.468919  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.469064  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.482545  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:07.979446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:06.528938  451943 system_pods.go:86] 6 kube-system pods found
	I0109 00:17:06.528963  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:06.528969  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:06.528973  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:06.528977  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:06.528987  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:06.528994  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:06.529016  451943 retry.go:31] will retry after 11.544909303s: missing components: etcd, kube-apiserver
	I0109 00:17:07.969131  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:09.969180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:10.479061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.480724  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.978853  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.468823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.469027  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:16.968659  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.081528  451943 system_pods.go:86] 8 kube-system pods found
	I0109 00:17:18.081568  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:18.081576  451943 system_pods.go:89] "etcd-old-k8s-version-003293" [f4516e0b-a960-4dc1-85c3-ae8197ded761] Running
	I0109 00:17:18.081583  451943 system_pods.go:89] "kube-apiserver-old-k8s-version-003293" [c5e83fe4-e95d-47ec-86a4-0615095ef746] Running
	I0109 00:17:18.081590  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:18.081596  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:18.081603  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:18.081613  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:18.081622  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:18.081636  451943 system_pods.go:126] duration metric: took 56.786133323s to wait for k8s-apps to be running ...
	I0109 00:17:18.081651  451943 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:17:18.081726  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:17:18.103798  451943 system_svc.go:56] duration metric: took 22.127635ms WaitForService to wait for kubelet.
	I0109 00:17:18.103844  451943 kubeadm.go:581] duration metric: took 1m7.273361806s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:17:18.103879  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:17:18.107740  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:17:18.107768  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:17:18.107803  451943 node_conditions.go:105] duration metric: took 3.918349ms to run NodePressure ...
	I0109 00:17:18.107814  451943 start.go:228] waiting for startup goroutines ...
	I0109 00:17:18.107826  451943 start.go:233] waiting for cluster config update ...
	I0109 00:17:18.107838  451943 start.go:242] writing updated cluster config ...
	I0109 00:17:18.108179  451943 ssh_runner.go:195] Run: rm -f paused
	I0109 00:17:18.161701  451943 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0109 00:17:18.163722  451943 out.go:177] 
	W0109 00:17:18.165269  451943 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0109 00:17:18.166781  451943 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0109 00:17:18.168422  451943 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-003293" cluster and "default" namespace by default
	I0109 00:17:16.980679  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:19.480507  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.969475  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.471739  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.978721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:24.478734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:23.968125  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:25.968375  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:26.483938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:28.979405  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:27.969238  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:29.969349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.973290  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.479085  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:33.978966  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:34.469294  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.967991  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.478328  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.481642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.970055  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:41.468509  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:40.978336  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:42.979499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:44.980394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:43.471069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:45.969083  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:47.479177  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:49.483109  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:48.469215  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:50.970448  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:51.979138  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:54.479275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:53.469152  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:55.470554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:56.480333  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:58.980818  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:57.968358  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:59.968498  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:01.485721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:03.980131  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:02.468272  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:04.469640  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:06.970010  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:05.981218  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:08.478827  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:09.469651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:11.970360  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:10.979972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:12.980174  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:14.470845  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:16.969297  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:15.479585  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:17.979035  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.979874  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.471447  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:21.473866  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:22.479239  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:24.979662  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:23.969077  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:26.469232  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:27.480054  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:29.978803  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:28.470397  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:30.968399  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:31.979175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:33.982180  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:32.467688  451984 pod_ready.go:81] duration metric: took 4m0.007315063s waiting for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	E0109 00:18:32.467715  451984 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:18:32.467724  451984 pod_ready.go:38] duration metric: took 4m2.010477321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:18:32.467740  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:18:32.467770  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:32.467841  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:32.540539  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.540568  451984 cri.go:89] found id: ""
	I0109 00:18:32.540578  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:32.540633  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.547617  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:32.547712  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:32.593446  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:32.593548  451984 cri.go:89] found id: ""
	I0109 00:18:32.593566  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:32.593622  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.598538  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:32.598630  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:32.641182  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.641217  451984 cri.go:89] found id: ""
	I0109 00:18:32.641227  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:32.641281  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.645529  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:32.645610  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:32.687187  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:32.687222  451984 cri.go:89] found id: ""
	I0109 00:18:32.687233  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:32.687299  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.691477  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:32.691551  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:32.730800  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:32.730834  451984 cri.go:89] found id: ""
	I0109 00:18:32.730853  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:32.730914  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.735372  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:32.735458  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:32.779326  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:32.779355  451984 cri.go:89] found id: ""
	I0109 00:18:32.779384  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:32.779528  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.784366  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:32.784444  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:32.825533  451984 cri.go:89] found id: ""
	I0109 00:18:32.825566  451984 logs.go:284] 0 containers: []
	W0109 00:18:32.825577  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:32.825586  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:32.825657  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:32.871429  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:32.871465  451984 cri.go:89] found id: ""
	I0109 00:18:32.871478  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:32.871546  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.876454  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:32.876483  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.931470  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:32.931518  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.976305  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:32.976344  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:33.421205  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:33.421256  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:33.436706  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:33.436752  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:33.605332  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:33.605369  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:33.653704  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:33.653746  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:33.697440  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:33.697489  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:33.753681  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:33.753728  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:33.798230  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:33.798271  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:33.862054  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:33.862089  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:33.942360  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:33.942549  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:33.965458  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:33.965503  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:34.012430  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012465  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:34.012554  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:34.012575  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:34.012583  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:34.012590  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012596  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:36.480501  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:38.979625  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:41.480903  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:43.978879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:44.014441  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:18:44.031831  451984 api_server.go:72] duration metric: took 4m15.676282348s to wait for apiserver process to appear ...
	I0109 00:18:44.031865  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:18:44.031906  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:44.031966  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:44.077138  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.077163  451984 cri.go:89] found id: ""
	I0109 00:18:44.077172  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:44.077232  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.081831  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:44.081906  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:44.121451  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.121474  451984 cri.go:89] found id: ""
	I0109 00:18:44.121482  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:44.121535  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.126070  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:44.126158  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:44.170657  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:44.170690  451984 cri.go:89] found id: ""
	I0109 00:18:44.170699  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:44.170753  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.175896  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:44.175977  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:44.220851  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.220877  451984 cri.go:89] found id: ""
	I0109 00:18:44.220886  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:44.220937  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.225006  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:44.225094  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:44.270073  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.270107  451984 cri.go:89] found id: ""
	I0109 00:18:44.270118  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:44.270188  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.275153  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:44.275245  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:44.318077  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:44.318111  451984 cri.go:89] found id: ""
	I0109 00:18:44.318122  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:44.318201  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.322475  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:44.322560  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:44.361736  451984 cri.go:89] found id: ""
	I0109 00:18:44.361773  451984 logs.go:284] 0 containers: []
	W0109 00:18:44.361784  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:44.361792  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:44.361864  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:44.404699  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:44.404726  451984 cri.go:89] found id: ""
	I0109 00:18:44.404737  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:44.404803  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.408753  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:44.408777  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.455119  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:44.455162  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.497680  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:44.497721  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:44.548809  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:44.548841  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:44.628959  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:44.629159  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:44.651315  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:44.651388  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:44.666013  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:44.666055  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.716269  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:44.716317  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.762681  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:44.762720  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:45.136682  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:45.136743  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:45.274971  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:45.275023  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:45.323164  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:45.323208  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:45.383823  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:45.383881  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:45.428483  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428516  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:45.428571  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:45.428579  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:45.428588  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:45.428601  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428608  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:45.980484  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:48.483446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:50.980210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:53.480495  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:55.429277  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:18:55.436812  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:18:55.438287  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:18:55.438316  451984 api_server.go:131] duration metric: took 11.40644287s to wait for apiserver health ...
	I0109 00:18:55.438327  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:18:55.438359  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:55.438433  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:55.485627  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:55.485654  451984 cri.go:89] found id: ""
	I0109 00:18:55.485664  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:55.485732  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.490219  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:55.490296  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:55.531890  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:55.531920  451984 cri.go:89] found id: ""
	I0109 00:18:55.531930  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:55.532002  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.536651  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:55.536724  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:55.579859  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:55.579909  451984 cri.go:89] found id: ""
	I0109 00:18:55.579921  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:55.579981  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.584894  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:55.584970  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:55.626833  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:55.626861  451984 cri.go:89] found id: ""
	I0109 00:18:55.626871  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:55.626940  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.631334  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:55.631449  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:55.675805  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:55.675831  451984 cri.go:89] found id: ""
	I0109 00:18:55.675843  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:55.675907  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.680727  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:55.680805  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:55.734757  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:55.734788  451984 cri.go:89] found id: ""
	I0109 00:18:55.734799  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:55.734867  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.739390  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:55.739464  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:55.785683  451984 cri.go:89] found id: ""
	I0109 00:18:55.785720  451984 logs.go:284] 0 containers: []
	W0109 00:18:55.785733  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:55.785741  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:55.785815  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:55.839983  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:55.840010  451984 cri.go:89] found id: ""
	I0109 00:18:55.840018  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:55.840066  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.844870  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:55.844897  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:55.979554  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:55.979600  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:56.023796  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:56.023840  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:56.070463  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:56.070512  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:56.116109  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:56.116142  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:56.505693  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:56.505742  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:56.566638  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:56.566683  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:56.649199  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.649372  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.670766  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:56.670809  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:56.719532  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:56.719574  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:56.763714  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:56.763758  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:56.825271  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:56.825324  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:56.869669  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:56.869717  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:56.890240  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890274  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:56.890355  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:56.890385  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.890395  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.890406  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890415  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:55.481178  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:57.979207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:59.980319  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:02.478816  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:04.478919  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:06.899277  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:19:06.899321  451984 system_pods.go:61] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.899329  451984 system_pods.go:61] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.899334  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.899338  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.899348  451984 system_pods.go:61] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.899352  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.899383  451984 system_pods.go:61] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.899395  451984 system_pods.go:61] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.899414  451984 system_pods.go:74] duration metric: took 11.461075857s to wait for pod list to return data ...
	I0109 00:19:06.899429  451984 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:19:06.903404  451984 default_sa.go:45] found service account: "default"
	I0109 00:19:06.903436  451984 default_sa.go:55] duration metric: took 3.995992ms for default service account to be created ...
	I0109 00:19:06.903448  451984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:19:06.910497  451984 system_pods.go:86] 8 kube-system pods found
	I0109 00:19:06.910523  451984 system_pods.go:89] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.910528  451984 system_pods.go:89] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.910533  451984 system_pods.go:89] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.910537  451984 system_pods.go:89] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.910541  451984 system_pods.go:89] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.910545  451984 system_pods.go:89] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.910553  451984 system_pods.go:89] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.910558  451984 system_pods.go:89] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.910564  451984 system_pods.go:126] duration metric: took 7.110675ms to wait for k8s-apps to be running ...
	I0109 00:19:06.910571  451984 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:19:06.910616  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:19:06.927621  451984 system_svc.go:56] duration metric: took 17.036468ms WaitForService to wait for kubelet.
	I0109 00:19:06.927654  451984 kubeadm.go:581] duration metric: took 4m38.572113328s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:19:06.927677  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:19:06.931040  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:19:06.931071  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:19:06.931083  451984 node_conditions.go:105] duration metric: took 3.401351ms to run NodePressure ...
	I0109 00:19:06.931095  451984 start.go:228] waiting for startup goroutines ...
	I0109 00:19:06.931101  451984 start.go:233] waiting for cluster config update ...
	I0109 00:19:06.931113  451984 start.go:242] writing updated cluster config ...
	I0109 00:19:06.931454  451984 ssh_runner.go:195] Run: rm -f paused
	I0109 00:19:06.989366  451984 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:19:06.991673  451984 out.go:177] * Done! kubectl is now configured to use "embed-certs-845373" cluster and "default" namespace by default
	I0109 00:19:06.479508  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:08.978313  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:11.482400  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:13.979056  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:16.480908  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:18.481024  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:20.482252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:22.978703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:24.979574  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:26.979620  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:29.478426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:31.478540  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:33.478901  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:35.978875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:36.471149  452237 pod_ready.go:81] duration metric: took 4m0.000060952s waiting for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	E0109 00:19:36.471203  452237 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:19:36.471221  452237 pod_ready.go:38] duration metric: took 4m3.426617855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:19:36.471243  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:19:36.471314  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:36.471400  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:36.539330  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:36.539370  452237 cri.go:89] found id: ""
	I0109 00:19:36.539383  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:36.539446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.544259  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:36.544339  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:36.591395  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:36.591437  452237 cri.go:89] found id: ""
	I0109 00:19:36.591448  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:36.591520  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.596454  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:36.596523  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:36.641041  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:36.641070  452237 cri.go:89] found id: ""
	I0109 00:19:36.641082  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:36.641145  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.645716  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:36.645798  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:36.686577  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:36.686607  452237 cri.go:89] found id: ""
	I0109 00:19:36.686618  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:36.686686  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.690744  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:36.690824  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:36.733504  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:36.733534  452237 cri.go:89] found id: ""
	I0109 00:19:36.733544  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:36.733613  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.738581  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:36.738663  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:36.783280  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:36.783314  452237 cri.go:89] found id: ""
	I0109 00:19:36.783326  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:36.783419  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.788101  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:36.788171  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:36.839094  452237 cri.go:89] found id: ""
	I0109 00:19:36.839124  452237 logs.go:284] 0 containers: []
	W0109 00:19:36.839133  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:36.839139  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:36.839201  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:36.880203  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:36.880236  452237 cri.go:89] found id: ""
	I0109 00:19:36.880247  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:36.880329  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.884703  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:36.884732  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:36.900132  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:36.900175  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:37.044558  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:37.044596  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:37.090555  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:37.090601  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:37.550107  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:37.550164  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:37.608267  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:37.608316  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:37.689186  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:37.689447  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:37.712896  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:37.712958  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:37.766035  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:37.766078  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:37.814072  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:37.814111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:37.858686  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:37.858725  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:37.912616  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:37.912661  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:37.973080  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:37.973129  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:38.016941  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.016989  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:38.017072  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:38.017088  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:38.017101  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:38.017118  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.017128  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:48.018753  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:19:48.040302  452237 api_server.go:72] duration metric: took 4m15.967717255s to wait for apiserver process to appear ...
	I0109 00:19:48.040335  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:19:48.040382  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:48.040539  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:48.105058  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.105084  452237 cri.go:89] found id: ""
	I0109 00:19:48.105095  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:48.105158  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.110067  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:48.110165  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:48.153350  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.153383  452237 cri.go:89] found id: ""
	I0109 00:19:48.153394  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:48.153464  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.158284  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:48.158355  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:48.205447  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.205480  452237 cri.go:89] found id: ""
	I0109 00:19:48.205492  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:48.205572  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.210254  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:48.210353  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:48.253594  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:48.253624  452237 cri.go:89] found id: ""
	I0109 00:19:48.253633  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:48.253700  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.259160  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:48.259229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:48.302358  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.302383  452237 cri.go:89] found id: ""
	I0109 00:19:48.302393  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:48.302446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.308134  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:48.308229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:48.349632  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:48.349656  452237 cri.go:89] found id: ""
	I0109 00:19:48.349664  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:48.349715  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.354626  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:48.354693  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:48.400501  452237 cri.go:89] found id: ""
	I0109 00:19:48.400535  452237 logs.go:284] 0 containers: []
	W0109 00:19:48.400547  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:48.400555  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:48.400626  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:48.444607  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:48.444631  452237 cri.go:89] found id: ""
	I0109 00:19:48.444641  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:48.444710  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.448965  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:48.449000  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:48.496050  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:48.496085  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:48.620778  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:48.620812  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.688155  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:48.688204  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.745755  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:48.745792  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.786141  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:48.786195  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.833422  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:48.833456  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:49.231467  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:49.231508  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:49.315139  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.315313  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.337901  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:49.337942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:49.353452  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:49.353494  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:49.409069  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:49.409111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:49.466267  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:49.466311  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:49.512720  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512762  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:49.512838  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:49.512858  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.512868  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.512882  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512891  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:59.513828  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:19:59.518896  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:19:59.520439  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:19:59.520463  452237 api_server.go:131] duration metric: took 11.480122148s to wait for apiserver health ...
	I0109 00:19:59.520479  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:19:59.520504  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:59.520549  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:59.566636  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:59.566669  452237 cri.go:89] found id: ""
	I0109 00:19:59.566680  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:59.566773  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.570754  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:59.570817  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:59.612286  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:59.612314  452237 cri.go:89] found id: ""
	I0109 00:19:59.612326  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:59.612399  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.618705  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:59.618778  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:59.666381  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:59.666408  452237 cri.go:89] found id: ""
	I0109 00:19:59.666417  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:59.666468  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.672155  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:59.672242  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:59.712973  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:59.712997  452237 cri.go:89] found id: ""
	I0109 00:19:59.713005  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:59.713068  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.717181  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:59.717261  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:59.762121  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:59.762153  452237 cri.go:89] found id: ""
	I0109 00:19:59.762163  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:59.762236  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.766573  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:59.766630  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:59.812202  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:59.812233  452237 cri.go:89] found id: ""
	I0109 00:19:59.812246  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:59.812309  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.817529  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:59.817615  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:59.865373  452237 cri.go:89] found id: ""
	I0109 00:19:59.865402  452237 logs.go:284] 0 containers: []
	W0109 00:19:59.865410  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:59.865417  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:59.865486  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:59.914250  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:59.914273  452237 cri.go:89] found id: ""
	I0109 00:19:59.914283  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:59.914369  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.918360  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:59.918391  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:59.999676  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:59.999875  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.022457  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:20:00.022496  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:20:00.082902  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:20:00.082942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:20:00.127886  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:20:00.127933  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:20:00.168705  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:20:00.168737  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:20:00.554704  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:20:00.554751  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:20:00.604427  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:20:00.604462  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:20:00.618923  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:20:00.618954  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:20:00.747443  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:20:00.747475  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:20:00.802652  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:20:00.802691  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:20:00.849279  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:20:00.849318  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:20:00.887879  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:20:00.887919  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:20:00.951894  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.951928  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:20:00.951999  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:20:00.952011  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:20:00.952019  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.952030  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.952035  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:20:10.962675  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:20:10.962706  452237 system_pods.go:61] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.962711  452237 system_pods.go:61] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.962716  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.962721  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.962725  452237 system_pods.go:61] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.962729  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.962735  452237 system_pods.go:61] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.962740  452237 system_pods.go:61] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.962747  452237 system_pods.go:74] duration metric: took 11.442261888s to wait for pod list to return data ...
	I0109 00:20:10.962755  452237 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:20:10.965782  452237 default_sa.go:45] found service account: "default"
	I0109 00:20:10.965808  452237 default_sa.go:55] duration metric: took 3.046646ms for default service account to be created ...
	I0109 00:20:10.965817  452237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:20:10.972286  452237 system_pods.go:86] 8 kube-system pods found
	I0109 00:20:10.972323  452237 system_pods.go:89] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.972331  452237 system_pods.go:89] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.972340  452237 system_pods.go:89] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.972349  452237 system_pods.go:89] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.972356  452237 system_pods.go:89] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.972366  452237 system_pods.go:89] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.972381  452237 system_pods.go:89] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.972392  452237 system_pods.go:89] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.972406  452237 system_pods.go:126] duration metric: took 6.583119ms to wait for k8s-apps to be running ...
	I0109 00:20:10.972427  452237 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:20:10.972490  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:20:10.992310  452237 system_svc.go:56] duration metric: took 19.873367ms WaitForService to wait for kubelet.
	I0109 00:20:10.992340  452237 kubeadm.go:581] duration metric: took 4m38.919766965s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:20:10.992363  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:20:10.996337  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:20:10.996373  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:20:10.996390  452237 node_conditions.go:105] duration metric: took 4.019869ms to run NodePressure ...
	I0109 00:20:10.996405  452237 start.go:228] waiting for startup goroutines ...
	I0109 00:20:10.996414  452237 start.go:233] waiting for cluster config update ...
	I0109 00:20:10.996429  452237 start.go:242] writing updated cluster config ...
	I0109 00:20:10.996742  452237 ssh_runner.go:195] Run: rm -f paused
	I0109 00:20:11.052916  452237 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0109 00:20:11.055339  452237 out.go:177] * Done! kubectl is now configured to use "no-preload-378213" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:00 UTC, ends at Tue 2024-01-09 00:28:09 UTC. --
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.857762839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760088857596290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=28a27875-1639-4cc4-b147-fc062006258c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.859528504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9088e5a3-d3c7-44aa-af37-6b5ba221fbcc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.859576141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9088e5a3-d3c7-44aa-af37-6b5ba221fbcc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.859745869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9088e5a3-d3c7-44aa-af37-6b5ba221fbcc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.906251623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0413b8fb-c96c-451a-b954-f561c646dcae name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.906424201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0413b8fb-c96c-451a-b954-f561c646dcae name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.907867869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=39b432c1-534d-4708-b964-19fe5c572e53 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.908324397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760088908309493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=39b432c1-534d-4708-b964-19fe5c572e53 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.909088416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d83ccca2-c52a-40d1-b967-181990537471 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.909142310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d83ccca2-c52a-40d1-b967-181990537471 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.909306931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d83ccca2-c52a-40d1-b967-181990537471 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.955263579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=abb7b467-bc3c-4f16-a945-402e5cdac50d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.955347340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=abb7b467-bc3c-4f16-a945-402e5cdac50d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.957216037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e1f9454c-0e77-48c2-a051-039c3676a031 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.958758110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760088957735934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e1f9454c-0e77-48c2-a051-039c3676a031 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.964299382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=975a430a-1ba1-419d-bb4e-e653929b13a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.964489690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=975a430a-1ba1-419d-bb4e-e653929b13a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:08 embed-certs-845373 crio[735]: time="2024-01-09 00:28:08.965656106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=975a430a-1ba1-419d-bb4e-e653929b13a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.023395692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1ab6f9a4-dd63-4335-be40-aa08ff06f542 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.023451908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1ab6f9a4-dd63-4335-be40-aa08ff06f542 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.024892109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f5d6b566-ca61-44c7-bde7-074164f5fa92 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.025310356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760089025296411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f5d6b566-ca61-44c7-bde7-074164f5fa92 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.026044521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f657e874-d3e5-4b6a-bdaa-722e8d90170c name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.026090863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f657e874-d3e5-4b6a-bdaa-722e8d90170c name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:28:09 embed-certs-845373 crio[735]: time="2024-01-09 00:28:09.026250155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f657e874-d3e5-4b6a-bdaa-722e8d90170c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc47842bcf90f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   ef8d1e250718b       storage-provisioner
	deabd24b79316       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   d23535b035417       coredns-5dd5756b68-j5mzp
	6004d919ad63c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   c4ba02b25054a       kube-proxy-nxtn2
	004d97d95671f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   1f55e027c6655       etcd-embed-certs-845373
	e1948c9408655       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   8c80a5a849a8c       kube-scheduler-embed-certs-845373
	3e878d8b2a29f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   75714a74543d6       kube-controller-manager-embed-certs-845373
	a465e638ed034       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   ba0eedb3e2b2e       kube-apiserver-embed-certs-845373
	
	
	==> coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-845373
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-845373
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=embed-certs-845373
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-845373
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:28:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:24:47 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:24:47 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:24:47 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:24:47 +0000   Tue, 09 Jan 2024 00:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.132
	  Hostname:    embed-certs-845373
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e89c5eae0b8446369000f9c55a1cbbc6
	  System UUID:                e89c5eae-0b84-4636-9000-f9c55a1cbbc6
	  Boot ID:                    f4410abe-81e8-47b6-8742-776c205ebec1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j5mzp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-845373                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-845373             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-845373    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nxtn2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-845373             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-zg66s               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node embed-certs-845373 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node embed-certs-845373 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node embed-certs-845373 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node embed-certs-845373 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node embed-certs-845373 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node embed-certs-845373 event: Registered Node embed-certs-845373 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066835] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.369305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.477955] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139146] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 9 00:09] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.866413] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.112627] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.152775] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.114459] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[  +0.229501] systemd-fstab-generator[719]: Ignoring "noauto" for root device
	[ +17.510670] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[ +20.055243] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 9 00:14] systemd-fstab-generator[3463]: Ignoring "noauto" for root device
	[  +9.300950] systemd-fstab-generator[3791]: Ignoring "noauto" for root device
	[ +14.155259] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] <==
	{"level":"info","ts":"2024-01-09T00:14:09.600671Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.132:2380"}
	{"level":"info","ts":"2024-01-09T00:14:09.600697Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.132:2380"}
	{"level":"info","ts":"2024-01-09T00:14:09.605707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 switched to configuration voters=(6269035391263149016)"}
	{"level":"info","ts":"2024-01-09T00:14:09.605862Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b8c14781592b9d32","local-member-id":"570016793c978bd8","added-peer-id":"570016793c978bd8","added-peer-peer-urls":["https://192.168.50.132:2380"]}
	{"level":"info","ts":"2024-01-09T00:14:10.1651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-09T00:14:10.165167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:14:10.165184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 received MsgPreVoteResp from 570016793c978bd8 at term 1"}
	{"level":"info","ts":"2024-01-09T00:14:10.165196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 received MsgVoteResp from 570016793c978bd8 at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 570016793c978bd8 elected leader 570016793c978bd8 at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.167079Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.168424Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"570016793c978bd8","local-member-attributes":"{Name:embed-certs-845373 ClientURLs:[https://192.168.50.132:2379]}","request-path":"/0/members/570016793c978bd8/attributes","cluster-id":"b8c14781592b9d32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:14:10.16912Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c14781592b9d32","local-member-id":"570016793c978bd8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.169241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.16929Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.169319Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:14:10.171101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.132:2379"}
	{"level":"info","ts":"2024-01-09T00:14:10.171176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:14:10.172288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:14:10.186687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:14:10.186759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:24:10.22271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2024-01-09T00:24:10.226438Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.783081ms","hash":737016404}
	{"level":"info","ts":"2024-01-09T00:24:10.22654Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":737016404,"revision":678,"compact-revision":-1}
	
	
	==> kernel <==
	 00:28:09 up 19 min,  0 users,  load average: 0.60, 0.39, 0.24
	Linux embed-certs-845373 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] <==
	I0109 00:24:11.855856       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:24:12.856742       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:24:12.856882       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:24:12.856896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:24:12.857069       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:24:12.857114       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:24:12.858414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:25:11.719817       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:25:12.857100       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:12.857267       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:25:12.857297       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:25:12.859608       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:12.859678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:25:12.859703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:26:11.719865       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0109 00:27:11.719767       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:27:12.858088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:27:12.858299       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:27:12.858335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:27:12.860589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:27:12.860661       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:27:12.860695       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] <==
	I0109 00:22:27.423558       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:22:56.938622       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:22:57.432768       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:23:26.946227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:23:27.442563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:23:56.952425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:23:57.451780       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:26.959024       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:27.460539       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:56.977638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:57.470303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:25:26.405274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="257.35µs"
	E0109 00:25:26.984198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:27.481046       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:25:37.407430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="187.048µs"
	E0109 00:25:56.991511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:57.491705       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:26.998719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:27.501239       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:57.005233       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:57.510819       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:27.011447       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:27.520059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:57.018029       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:57.529620       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] <==
	I0109 00:14:30.706197       1 server_others.go:69] "Using iptables proxy"
	I0109 00:14:30.734014       1 node.go:141] Successfully retrieved node IP: 192.168.50.132
	I0109 00:14:31.356439       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:14:31.356546       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:14:31.392038       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:14:31.393727       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:14:31.394214       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:14:31.394254       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:14:31.397626       1 config.go:188] "Starting service config controller"
	I0109 00:14:31.398308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:14:31.398368       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:14:31.398387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:14:31.403512       1 config.go:315] "Starting node config controller"
	I0109 00:14:31.403556       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:14:31.498756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:14:31.509378       1 shared_informer.go:318] Caches are synced for node config
	I0109 00:14:31.509620       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] <==
	W0109 00:14:11.862144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:11.862623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:12.813477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:14:12.813525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0109 00:14:12.838106       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:14:12.838170       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:14:12.902299       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:14:12.902354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:14:12.911532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:12.911602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:12.922147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:14:12.922203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:14:12.968179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:12.968232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:13.004772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:14:13.004983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:14:13.036405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:14:13.036460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:14:13.063161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:14:13.063186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:14:13.081880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:14:13.081989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:14:13.088989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:14:13.089015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0109 00:14:16.042559       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:00 UTC, ends at Tue 2024-01-09 00:28:09 UTC. --
	Jan 09 00:25:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:25:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:25:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:25:26 embed-certs-845373 kubelet[3798]: E0109 00:25:26.384964    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:25:37 embed-certs-845373 kubelet[3798]: E0109 00:25:37.385856    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:25:50 embed-certs-845373 kubelet[3798]: E0109 00:25:50.384294    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:26:05 embed-certs-845373 kubelet[3798]: E0109 00:26:05.386282    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:26:15 embed-certs-845373 kubelet[3798]: E0109 00:26:15.483618    3798 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:26:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:26:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:26:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:26:16 embed-certs-845373 kubelet[3798]: E0109 00:26:16.385738    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:26:27 embed-certs-845373 kubelet[3798]: E0109 00:26:27.385096    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:26:38 embed-certs-845373 kubelet[3798]: E0109 00:26:38.384170    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:26:51 embed-certs-845373 kubelet[3798]: E0109 00:26:51.385461    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:27:02 embed-certs-845373 kubelet[3798]: E0109 00:27:02.384846    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:27:15 embed-certs-845373 kubelet[3798]: E0109 00:27:15.385539    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:27:15 embed-certs-845373 kubelet[3798]: E0109 00:27:15.479436    3798 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:27:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:27:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:27:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:27:27 embed-certs-845373 kubelet[3798]: E0109 00:27:27.386321    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:27:38 embed-certs-845373 kubelet[3798]: E0109 00:27:38.384631    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:27:53 embed-certs-845373 kubelet[3798]: E0109 00:27:53.387008    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:05 embed-certs-845373 kubelet[3798]: E0109 00:28:05.386059    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	
	
	==> storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] <==
	I0109 00:14:31.860803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:14:31.884016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:14:31.884384       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:14:31.897423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:14:31.898536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c!
	I0109 00:14:31.897868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73937ad1-88d2-476e-aac5-99db1703d35c", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c became leader
	I0109 00:14:32.000199       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c!
	

                                                
                                                
-- /stdout --
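Note on the kubelet errors above: the repeated ImagePullBackOff for metrics-server-57f55c9bc5-zg66s appears to be intentional rather than a separate problem. Earlier in the run the metrics-server addon was pointed at a deliberately unreachable registry, so the image fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. Reconstructed from the Audit table in the no-preload logs further below, the enabling command was of the form:

	out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain

The pod this test is actually waiting on is the kubernetes-dashboard pod, not metrics-server.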
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845373 -n embed-certs-845373
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-845373 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zg66s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s: exit status 1 (78.33189ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zg66s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.54s)
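In short, the step failed because no pod labelled k8s-app=kubernetes-dashboard ever appeared in the kubernetes-dashboard namespace within the 9m0s window after the stop/start cycle (the same wait condition is spelled out for the no-preload run below), and the metrics-server pod flagged as non-running had already disappeared by the time kubectl describe ran, hence the NotFound error. A manual re-check, assuming the embed-certs-845373 profile were still available, would look something like:

	kubectl --context embed-certs-845373 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	out/minikube-linux-amd64 -p embed-certs-845373 addons list

Both commands are illustrative only, since the profile may already have been deleted by later steps in the suite.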

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:20:28.030734  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:20:42.676420  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:20:45.372581  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:20:49.610919  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:21:03.160174  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:21:13.676962  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0109 00:21:28.294803  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:21:37.766134  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:21:51.073779  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:22:20.222837  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:23:36.012573  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-378213 -n no-preload-378213
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:29:11.699776966 +0000 UTC m=+5850.576727184
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
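The cert_rotation errors interleaved above appear to be background noise from the test binary rather than part of this failure: client-go's certificate reload watcher is still tracking client certificates for profiles that were deleted earlier in the run, so the files it tries to reopen no longer exist. The deletions show up in the Audit table below, for example (reconstructed):

	out/minikube-linux-amd64 delete -p bridge-976891

The failure itself mirrors the embed-certs case: the kubernetes-dashboard pods never came up within 9m0s after the stop/start cycle.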
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-378213 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-378213 logs -n 25: (1.331953782s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| start   | -p newest-cni-745275 --memory=2200 --alsologtostderr   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:29:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:29:11.688732  457766 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:29:11.688871  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.688878  457766 out.go:309] Setting ErrFile to fd 2...
	I0109 00:29:11.688885  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.689175  457766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:29:11.689819  457766 out.go:303] Setting JSON to false
	I0109 00:29:11.690900  457766 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18678,"bootTime":1704741474,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:29:11.690970  457766 start.go:138] virtualization: kvm guest
	I0109 00:29:11.693663  457766 out.go:177] * [newest-cni-745275] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:29:11.695220  457766 notify.go:220] Checking for updates...
	I0109 00:29:11.696511  457766 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:29:11.697854  457766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:29:11.699113  457766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:29:11.700713  457766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:11.702142  457766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:29:11.703441  457766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:29:11.705450  457766 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705599  457766 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705733  457766 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:29:11.705960  457766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:29:11.753468  457766 out.go:177] * Using the kvm2 driver based on user configuration
	I0109 00:29:11.755037  457766 start.go:298] selected driver: kvm2
	I0109 00:29:11.755057  457766 start.go:902] validating driver "kvm2" against <nil>
	I0109 00:29:11.755077  457766 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:29:11.755971  457766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.756044  457766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:29:11.773794  457766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:29:11.773854  457766 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0109 00:29:11.773927  457766 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0109 00:29:11.776728  457766 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0109 00:29:11.776819  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:29:11.776837  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:29:11.776868  457766 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0109 00:29:11.776889  457766 start_flags.go:323] config:
	{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:29:11.777170  457766 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.779020  457766 out.go:177] * Starting control plane node newest-cni-745275 in cluster newest-cni-745275
	I0109 00:29:11.780422  457766 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:29:11.780468  457766 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0109 00:29:11.780477  457766 cache.go:56] Caching tarball of preloaded images
	I0109 00:29:11.780546  457766 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:29:11.780557  457766 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0109 00:29:11.780646  457766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:29:11.780691  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json: {Name:mk4d641c387ca3ed27cddd141100c40e37d72082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:11.780835  457766 start.go:365] acquiring machines lock for newest-cni-745275: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:29:11.780874  457766 start.go:369] acquired machines lock for "newest-cni-745275" in 24.81µs
	I0109 00:29:11.780899  457766 start.go:93] Provisioning new machine with config: &{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:29:11.780969  457766 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:20 UTC, ends at Tue 2024-01-09 00:29:12 UTC. --
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.516154746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ffb2552d-4694-4ac4-bb69-c4acf1809d1b name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.517198497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4e69e010-b2c5-47f9-8aa2-63e8723acf3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.517541087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760152517528524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=4e69e010-b2c5-47f9-8aa2-63e8723acf3b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.518097837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d1ee967-898e-4742-b045-29840a2de78f name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.518181394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d1ee967-898e-4742-b045-29840a2de78f name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.518357607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d1ee967-898e-4742-b045-29840a2de78f name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.539552970Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=023d41e6-c3fa-4a21-9d4a-915cde782a78 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.539828597Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ztvgr,Uid:9dca02e6-8b8c-491f-a689-fb9b51c5f88e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759333620477510,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:15:32.388523399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:95fe5038-977e-430a-8bda-42557c536114,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759333270384945,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\"
:{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-09T00:15:32.910157936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94d59f6c5fcaa87f9a6e25a20d3c724edf8731a2b683e0019b4d2e9861cc258a,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-k426v,Uid:ccc02dbd-f70f-46d3-b39d-0fef97bfa04e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759333253516346,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-k426v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccc02dbd-f70f-46d3-b39d-0fef97bfa04e,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:15:32.904474906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&PodSandboxMetadata{Name:kube-proxy-4vnf5,Uid:1a87e8a6-55b5-4579-aa4e-1a2
0be126ba2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759332351116008,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:15:31.715199671Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-378213,Uid:9cfef17d11830a8ed29b7b05a894b9a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759310526846385,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cfef17d11830a8ed29b7b05a894
b9a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.62:8443,kubernetes.io/config.hash: 9cfef17d11830a8ed29b7b05a894b9a9,kubernetes.io/config.seen: 2024-01-09T00:15:10.010135343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-378213,Uid:a10b7db81804221180b16bf73df17840,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759310516201303,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a10b7db81804221180b16bf73df17840,kubernetes.io/config.seen: 2024-01-09T00:15:10.010138048Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-378213,Uid:52dd6fed1fd30892e205b9a6becc8177,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759310511624209,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 52dd6fed1fd30892e205b9a6becc8177,kubernetes.io/config.seen: 2024-01-09T00:15:10.010136632Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-378213,Uid:bb34a70bb18e99dfe7af59f87c242f79,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17047593
10500173888,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.62:2379,kubernetes.io/config.hash: bb34a70bb18e99dfe7af59f87c242f79,kubernetes.io/config.seen: 2024-01-09T00:15:10.010130861Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=023d41e6-c3fa-4a21-9d4a-915cde782a78 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.540740617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd464c1c-df97-48ff-b883-0dc0f88e4434 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.540815875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd464c1c-df97-48ff-b883-0dc0f88e4434 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.541056459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd464c1c-df97-48ff-b883-0dc0f88e4434 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.563330429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ee86e802-8882-4caf-b487-bcf7a614febe name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.563407996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ee86e802-8882-4caf-b487-bcf7a614febe name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.565642825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8fd29b5e-3127-4158-a18b-7291b155610d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.566192683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760152566170043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8fd29b5e-3127-4158-a18b-7291b155610d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.567839897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c2591eb1-c6ac-4c1a-bfe3-9399aec8e739 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.567885907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c2591eb1-c6ac-4c1a-bfe3-9399aec8e739 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.568103284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c2591eb1-c6ac-4c1a-bfe3-9399aec8e739 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.610163554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c75585e9-b5f9-474f-a070-a96bffdf6cbb name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.610221571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c75585e9-b5f9-474f-a070-a96bffdf6cbb name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.612036258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d447d63c-2d3d-4fb1-ab24-a6c7238e3eae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.612367413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760152612353305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d447d63c-2d3d-4fb1-ab24-a6c7238e3eae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.612872580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b96deaac-f837-4635-a8a0-4a57dc37276a name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.612918728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b96deaac-f837-4635-a8a0-4a57dc37276a name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:12 no-preload-378213 crio[712]: time="2024-01-09 00:29:12.613180274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b96deaac-f837-4635-a8a0-4a57dc37276a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9ddb767a3680b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   28e8d7228f95c       storage-provisioner
	16e8e419faf28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   4021cd157f894       coredns-76f75df574-ztvgr
	577d39068d7c0       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   c21d689adefa6       kube-proxy-4vnf5
	31914c8452b6b       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   d9ded595452b7       kube-apiserver-no-preload-378213
	3f150bb39755e       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   7ca36c7ddc5b7       etcd-no-preload-378213
	6657ae7032ad4       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   72a56456d833b       kube-scheduler-no-preload-378213
	315a6bb636ced       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   bc6ea128fce71       kube-controller-manager-no-preload-378213
	
	
	==> coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47191 - 39189 "HINFO IN 5390086558276774289.492511922595418031. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.04397005s
	
	
	==> describe nodes <==
	Name:               no-preload-378213
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-378213
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=no-preload-378213
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:15:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-378213
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:29:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.62
	  Hostname:    no-preload-378213
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c599400ad65a4458b3dd9b13cea40b29
	  System UUID:                c599400a-d65a-4458-b3dd-9b13cea40b29
	  Boot ID:                    8a539832-187e-4228-8d3f-4c857670d960
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ztvgr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-378213                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-378213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-378213    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4vnf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-378213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-k426v              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-378213 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-378213 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-378213 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node no-preload-378213 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node no-preload-378213 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-378213 event: Registered Node no-preload-378213 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.599910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.462871] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150587] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.241046] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.122966] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.154374] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.121895] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.270134] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +30.110927] systemd-fstab-generator[1320]: Ignoring "noauto" for root device
	[Jan 9 00:10] kauditd_printk_skb: 5 callbacks suppressed
	[ +27.383710] kauditd_printk_skb: 14 callbacks suppressed
	[Jan 9 00:15] systemd-fstab-generator[3974]: Ignoring "noauto" for root device
	[  +9.804596] systemd-fstab-generator[4306]: Ignoring "noauto" for root device
	[ +14.220585] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] <==
	{"level":"info","ts":"2024-01-09T00:15:13.352724Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.62:2380"}
	{"level":"info","ts":"2024-01-09T00:15:13.353442Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.62:2380"}
	{"level":"info","ts":"2024-01-09T00:15:13.353305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc switched to configuration voters=(17594045746974101212)"}
	{"level":"info","ts":"2024-01-09T00:15:13.353604Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f37c55d4fac412f","local-member-id":"f42a9b63be5d0edc","added-peer-id":"f42a9b63be5d0edc","added-peer-peer-urls":["https://192.168.61.62:2380"]}
	{"level":"info","ts":"2024-01-09T00:15:14.283337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc received MsgPreVoteResp from f42a9b63be5d0edc at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc received MsgVoteResp from f42a9b63be5d0edc at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f42a9b63be5d0edc elected leader f42a9b63be5d0edc at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.285438Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f42a9b63be5d0edc","local-member-attributes":"{Name:no-preload-378213 ClientURLs:[https://192.168.61.62:2379]}","request-path":"/0/members/f42a9b63be5d0edc/attributes","cluster-id":"2f37c55d4fac412f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:15:14.285521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:15:14.285915Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:15:14.286098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:15:14.285548Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.285572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:15:14.287763Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f37c55d4fac412f","local-member-id":"f42a9b63be5d0edc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.287853Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.287898Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.289507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.62:2379"}
	{"level":"info","ts":"2024-01-09T00:15:14.290182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:25:14.334229Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-01-09T00:25:14.337654Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":683,"took":"2.964518ms","hash":373110865}
	{"level":"info","ts":"2024-01-09T00:25:14.337777Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":373110865,"revision":683,"compact-revision":-1}
	
	
	==> kernel <==
	 00:29:12 up 20 min,  0 users,  load average: 0.36, 0.25, 0.27
	Linux no-preload-378213 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] <==
	I0109 00:23:16.823408       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:25:15.824820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:15.824940       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0109 00:25:16.825442       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:16.825585       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:25:16.825623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:25:16.825922       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:16.826237       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:25:16.827490       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:26:16.826540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:26:16.826636       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:26:16.826649       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:26:16.827743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:26:16.827885       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:26:16.828177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:28:16.827415       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:16.827734       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:28:16.827769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:28:16.828717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:16.828852       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:28:16.828886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] <==
	I0109 00:23:31.582438       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:01.079525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:01.591847       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:31.087073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:31.600655       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:25:01.094182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:01.611580       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:25:31.100599       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:31.621748       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:01.106816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:01.631536       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:31.113764       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:31.641662       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:26:35.557495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="308.125µs"
	I0109 00:26:47.559513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="166.541µs"
	E0109 00:27:01.119660       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:01.651865       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:31.125653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:31.671651       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:01.131594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:01.679826       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:31.141353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:31.688811       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:01.148114       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:01.697837       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] <==
	I0109 00:15:34.651765       1 server_others.go:72] "Using iptables proxy"
	I0109 00:15:34.797170       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.62"]
	I0109 00:15:34.905437       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0109 00:15:34.907791       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:15:34.907920       1 server_others.go:168] "Using iptables Proxier"
	I0109 00:15:34.912224       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:15:34.912522       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0109 00:15:34.912566       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:15:34.914322       1 config.go:188] "Starting service config controller"
	I0109 00:15:34.914338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:15:34.914351       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:15:34.914354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:15:34.914678       1 config.go:315] "Starting node config controller"
	I0109 00:15:34.914684       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:15:35.015884       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:15:35.016935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:15:35.019546       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] <==
	W0109 00:15:16.812633       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:16.812920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0109 00:15:16.857613       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:16.857737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:15:16.924300       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.924370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:16.935062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:15:16.935137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:15:16.972696       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.973072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:16.992525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.992601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:17.021089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:17.021208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:15:17.032400       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:17.032528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:15:17.143862       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:15:17.143927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:15:17.208264       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:15:17.208409       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:15:17.260434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:17.260529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:15:17.269601       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:17.269699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0109 00:15:20.078014       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:20 UTC, ends at Tue 2024-01-09 00:29:13 UTC. --
	Jan 09 00:26:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:26:24 no-preload-378213 kubelet[4312]: E0109 00:26:24.578244    4312 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 09 00:26:24 no-preload-378213 kubelet[4312]: E0109 00:26:24.578321    4312 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 09 00:26:24 no-preload-378213 kubelet[4312]: E0109 00:26:24.578530    4312 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5mthg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-k426v_kube-system(ccc02dbd-f70f-46d3-b39d-0fef97bfa04e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:26:24 no-preload-378213 kubelet[4312]: E0109 00:26:24.578574    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:26:35 no-preload-378213 kubelet[4312]: E0109 00:26:35.538899    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:26:47 no-preload-378213 kubelet[4312]: E0109 00:26:47.539369    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:00 no-preload-378213 kubelet[4312]: E0109 00:27:00.538391    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:13 no-preload-378213 kubelet[4312]: E0109 00:27:13.538647    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]: E0109 00:27:19.651662    4312 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:27:24 no-preload-378213 kubelet[4312]: E0109 00:27:24.537565    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:36 no-preload-378213 kubelet[4312]: E0109 00:27:36.538735    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:51 no-preload-378213 kubelet[4312]: E0109 00:27:51.538662    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:06 no-preload-378213 kubelet[4312]: E0109 00:28:06.537683    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:17 no-preload-378213 kubelet[4312]: E0109 00:28:17.539496    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]: E0109 00:28:19.649289    4312 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:28:32 no-preload-378213 kubelet[4312]: E0109 00:28:32.538321    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:47 no-preload-378213 kubelet[4312]: E0109 00:28:47.538891    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:29:02 no-preload-378213 kubelet[4312]: E0109 00:29:02.538274    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	
	
	==> storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] <==
	I0109 00:15:35.046880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:15:35.075030       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:15:35.075229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:15:35.094183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:15:35.095343       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6!
	I0109 00:15:35.097350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b6d1faa-1894-47b9-8272-4983df82590d", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6 became leader
	I0109 00:15:35.196401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-378213 -n no-preload-378213
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-378213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-k426v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v: exit status 1 (83.792267ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-k426v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:24:19.627576  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:24:22.327728  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:24:40.117001  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:25:28.030559  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:25:49.610548  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:26:13.676210  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:31:00.790843869 +0000 UTC m=+5959.667794088
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-834116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.835µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-834116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-834116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-834116 logs -n 25: (1.245618814s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| start   | -p newest-cni-745275 --memory=2200 --alsologtostderr   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| addons  | enable metrics-server -p newest-cni-745275             | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC | 09 Jan 24 00:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-745275                                   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC | 09 Jan 24 00:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC | 09 Jan 24 00:30 UTC |
	| addons  | enable dashboard -p newest-cni-745275                  | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC | 09 Jan 24 00:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-745275 --memory=2200 --alsologtostderr   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:30:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:30:30.385102  458793 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:30:30.385369  458793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:30:30.385378  458793 out.go:309] Setting ErrFile to fd 2...
	I0109 00:30:30.385383  458793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:30:30.385622  458793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:30:30.386238  458793 out.go:303] Setting JSON to false
	I0109 00:30:30.387293  458793 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18756,"bootTime":1704741474,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:30:30.387376  458793 start.go:138] virtualization: kvm guest
	I0109 00:30:30.389868  458793 out.go:177] * [newest-cni-745275] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:30:30.391275  458793 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:30:30.392597  458793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:30:30.391280  458793 notify.go:220] Checking for updates...
	I0109 00:30:30.395097  458793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:30:30.396446  458793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:30:30.397835  458793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:30:30.399373  458793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:30:30.401409  458793 config.go:182] Loaded profile config "newest-cni-745275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:30:30.402084  458793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:30.402139  458793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:30.416414  458793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I0109 00:30:30.416862  458793 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:30.417405  458793 main.go:141] libmachine: Using API Version  1
	I0109 00:30:30.417423  458793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:30.417767  458793 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:30.417942  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:30.418153  458793 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:30:30.418442  458793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:30.418487  458793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:30.433510  458793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0109 00:30:30.433956  458793 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:30.434450  458793 main.go:141] libmachine: Using API Version  1
	I0109 00:30:30.434476  458793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:30.434806  458793 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:30.435022  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:30.470824  458793 out.go:177] * Using the kvm2 driver based on existing profile
	I0109 00:30:30.472272  458793 start.go:298] selected driver: kvm2
	I0109 00:30:30.472286  458793 start.go:902] validating driver "kvm2" against &{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:30:30.472434  458793 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:30:30.473092  458793 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:30:30.473176  458793 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:30:30.487872  458793 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:30:30.488311  458793 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0109 00:30:30.488386  458793 cni.go:84] Creating CNI manager for ""
	I0109 00:30:30.488399  458793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:30:30.488410  458793 start_flags.go:323] config:
	{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:30:30.488618  458793 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:30:30.490606  458793 out.go:177] * Starting control plane node newest-cni-745275 in cluster newest-cni-745275
	I0109 00:30:30.491884  458793 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:30:30.491921  458793 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0109 00:30:30.491935  458793 cache.go:56] Caching tarball of preloaded images
	I0109 00:30:30.492021  458793 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:30:30.492035  458793 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0109 00:30:30.492191  458793 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:30:30.492400  458793 start.go:365] acquiring machines lock for newest-cni-745275: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:30:30.492445  458793 start.go:369] acquired machines lock for "newest-cni-745275" in 25.016µs
	I0109 00:30:30.492465  458793 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:30:30.492472  458793 fix.go:54] fixHost starting: 
	I0109 00:30:30.492774  458793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:30.492813  458793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:30.506855  458793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0109 00:30:30.507320  458793 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:30.507824  458793 main.go:141] libmachine: Using API Version  1
	I0109 00:30:30.507849  458793 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:30.508239  458793 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:30.508450  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:30.508606  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:30:30.510393  458793 fix.go:102] recreateIfNeeded on newest-cni-745275: state=Stopped err=<nil>
	I0109 00:30:30.510415  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	W0109 00:30:30.510582  458793 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:30:30.512743  458793 out.go:177] * Restarting existing kvm2 VM for "newest-cni-745275" ...
	I0109 00:30:30.514231  458793 main.go:141] libmachine: (newest-cni-745275) Calling .Start
	I0109 00:30:30.514404  458793 main.go:141] libmachine: (newest-cni-745275) Ensuring networks are active...
	I0109 00:30:30.515124  458793 main.go:141] libmachine: (newest-cni-745275) Ensuring network default is active
	I0109 00:30:30.515481  458793 main.go:141] libmachine: (newest-cni-745275) Ensuring network mk-newest-cni-745275 is active
	I0109 00:30:30.515839  458793 main.go:141] libmachine: (newest-cni-745275) Getting domain xml...
	I0109 00:30:30.516742  458793 main.go:141] libmachine: (newest-cni-745275) Creating domain...
	I0109 00:30:31.797242  458793 main.go:141] libmachine: (newest-cni-745275) Waiting to get IP...
	I0109 00:30:31.798288  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:31.798682  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:31.798740  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:31.798650  458828 retry.go:31] will retry after 222.907461ms: waiting for machine to come up
	I0109 00:30:32.023183  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:32.023656  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:32.023691  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:32.023598  458828 retry.go:31] will retry after 358.898289ms: waiting for machine to come up
	I0109 00:30:32.384176  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:32.384751  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:32.384787  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:32.384707  458828 retry.go:31] will retry after 325.346423ms: waiting for machine to come up
	I0109 00:30:32.711307  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:32.711732  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:32.711761  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:32.711686  458828 retry.go:31] will retry after 544.195371ms: waiting for machine to come up
	I0109 00:30:33.257440  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:33.257970  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:33.257999  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:33.257901  458828 retry.go:31] will retry after 555.212799ms: waiting for machine to come up
	I0109 00:30:33.814655  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:33.815147  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:33.815190  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:33.815077  458828 retry.go:31] will retry after 812.162762ms: waiting for machine to come up
	I0109 00:30:34.628665  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:34.629048  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:34.629079  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:34.628987  458828 retry.go:31] will retry after 974.065453ms: waiting for machine to come up
	I0109 00:30:35.605166  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:35.605703  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:35.605733  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:35.605648  458828 retry.go:31] will retry after 919.192029ms: waiting for machine to come up
	I0109 00:30:36.526931  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:36.527370  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:36.527403  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:36.527291  458828 retry.go:31] will retry after 1.642950926s: waiting for machine to come up
	I0109 00:30:38.172066  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:38.172531  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:38.172556  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:38.172482  458828 retry.go:31] will retry after 1.696546645s: waiting for machine to come up
	I0109 00:30:39.870782  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:39.871333  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:39.871390  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:39.871268  458828 retry.go:31] will retry after 2.88361546s: waiting for machine to come up
	I0109 00:30:42.756573  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:42.757042  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:42.757080  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:42.756970  458828 retry.go:31] will retry after 2.330155857s: waiting for machine to come up
	I0109 00:30:45.090093  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:45.090485  458793 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:30:45.090508  458793 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:30:45.090442  458828 retry.go:31] will retry after 2.850469816s: waiting for machine to come up
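The lines above show libmachine polling libvirt for the guest's DHCP lease with a growing, jittered delay (roughly 325ms, 544ms, 555ms, ... up to about 2.9s) until the lease appears. A minimal Go sketch of that retry-with-backoff pattern; lookupLeaseIP and waitForIP are illustrative names, not minikube's actual retry.go helpers:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain".
var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP is a stand-in for querying libvirt's DHCP leases for the VM's MAC.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease // pretend the guest has not booted far enough yet
}

// waitForIP retries lookupLeaseIP with a jittered, growing delay until the
// lease shows up or the overall deadline passes, mirroring the log's
// "will retry after ..." progression.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupLeaseIP(mac)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay geometrically
	}
}

func main() {
	if _, err := waitForIP("52:54:00:41:55:15", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}

The growing delay keeps the libvirt query load low while the guest is still booting.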
	I0109 00:30:47.944080  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:47.944664  458793 main.go:141] libmachine: (newest-cni-745275) Found IP for machine: 192.168.72.107
	I0109 00:30:47.944803  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has current primary IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:47.944855  458793 main.go:141] libmachine: (newest-cni-745275) Reserving static IP address...
	I0109 00:30:47.945015  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "newest-cni-745275", mac: "52:54:00:41:55:15", ip: "192.168.72.107"} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:47.945042  458793 main.go:141] libmachine: (newest-cni-745275) Reserved static IP address: 192.168.72.107
	I0109 00:30:47.945079  458793 main.go:141] libmachine: (newest-cni-745275) DBG | skip adding static IP to network mk-newest-cni-745275 - found existing host DHCP lease matching {name: "newest-cni-745275", mac: "52:54:00:41:55:15", ip: "192.168.72.107"}
	I0109 00:30:47.945097  458793 main.go:141] libmachine: (newest-cni-745275) DBG | Getting to WaitForSSH function...
	I0109 00:30:47.945113  458793 main.go:141] libmachine: (newest-cni-745275) Waiting for SSH to be available...
	I0109 00:30:47.947092  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:47.947406  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:47.947443  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:47.947558  458793 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH client type: external
	I0109 00:30:47.947587  458793 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa (-rw-------)
	I0109 00:30:47.947609  458793 main.go:141] libmachine: (newest-cni-745275) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:30:47.947622  458793 main.go:141] libmachine: (newest-cni-745275) DBG | About to run SSH command:
	I0109 00:30:47.947638  458793 main.go:141] libmachine: (newest-cni-745275) DBG | exit 0
	I0109 00:30:48.038923  458793 main.go:141] libmachine: (newest-cni-745275) DBG | SSH cmd err, output: <nil>: 
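To decide that sshd is reachable, libmachine shells out to the system ssh binary with the option list logged above and runs "exit 0"; a zero exit status (the empty error and output seen here) means SSH is available. A hedged Go sketch of that probe; probeSSH is an illustrative name, and the option list is copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs "exit 0" on the guest via the external ssh client, using the
// same non-interactive options the log shows, to decide whether sshd is up yet.
func probeSSH(user, host, keyPath string, port int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprint(port),
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (output: %q)", err, out)
	}
	return nil
}

func main() {
	err := probeSSH("docker", "192.168.72.107",
		"/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa", 22)
	fmt.Println("ssh available:", err == nil)
}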
	I0109 00:30:48.039341  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:30:48.040091  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:30:48.042620  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.042964  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.042988  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.043220  458793 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:30:48.043438  458793 machine.go:88] provisioning docker machine ...
	I0109 00:30:48.043463  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:48.043695  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:30:48.043899  458793 buildroot.go:166] provisioning hostname "newest-cni-745275"
	I0109 00:30:48.043933  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:30:48.044085  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.046615  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.046965  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.046988  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.047129  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:48.047291  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.047468  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.047627  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:48.047942  458793 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:48.048284  458793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:30:48.048298  458793 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-745275 && echo "newest-cni-745275" | sudo tee /etc/hostname
	I0109 00:30:48.177422  458793 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-745275
	
	I0109 00:30:48.177457  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.180486  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.180798  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.180824  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.181010  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:48.181231  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.181439  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.181587  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:48.181767  458793 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:48.182083  458793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:30:48.182099  458793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-745275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-745275/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-745275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:30:48.303425  458793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
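Provisioning then sets the guest hostname and patches /etc/hosts so 127.0.1.1 resolves to it, using the two shell snippets shown above. A small Go helper that renders the same commands for a given machine name (illustrative only; the quoting mirrors the log output above):

package main

import "fmt"

// hostnameCommands renders the two provisioning commands the log runs over SSH:
// set the hostname, then make sure /etc/hosts has a 127.0.1.1 entry for it
// (replacing an existing one, or appending if none is present).
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("newest-cni-745275")
	fmt.Println(set)
	fmt.Println(fix)
}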
	I0109 00:30:48.303459  458793 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:30:48.303488  458793 buildroot.go:174] setting up certificates
	I0109 00:30:48.303501  458793 provision.go:83] configureAuth start
	I0109 00:30:48.303517  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:30:48.303788  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:30:48.306487  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.306830  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.306864  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.307023  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.309297  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.309667  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.309706  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.309819  458793 provision.go:138] copyHostCerts
	I0109 00:30:48.309891  458793 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:30:48.309902  458793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:30:48.309963  458793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:30:48.310057  458793 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:30:48.310068  458793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:30:48.310095  458793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:30:48.310145  458793 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:30:48.310152  458793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:30:48.310171  458793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:30:48.310211  458793 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.newest-cni-745275 san=[192.168.72.107 192.168.72.107 localhost 127.0.0.1 minikube newest-cni-745275]
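The server certificate is issued from the shared minikube CA with the SANs listed above (the VM IP, localhost, 127.0.0.1, and the machine names). A self-contained sketch of issuing a SAN-bearing server certificate with Go's crypto/x509; the CA here is generated in memory rather than loaded from ~/.minikube/certs, and the names and durations are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// In-memory CA standing in for ~/.minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs the log lists for this machine.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-745275"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-745275"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.107"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	srvCert, _ := x509.ParseCertificate(srvDER)
	fmt.Println("issued server cert, DNS SANs:", srvCert.DNSNames, "IP SANs:", srvCert.IPAddresses)
}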
	I0109 00:30:48.442189  458793 provision.go:172] copyRemoteCerts
	I0109 00:30:48.442255  458793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:30:48.442281  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.445135  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.445531  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.445570  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.445755  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:48.445985  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.446212  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:48.446366  458793 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:48.533413  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:30:48.555760  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:30:48.578139  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:30:48.599548  458793 provision.go:86] duration metric: configureAuth took 296.029014ms
	I0109 00:30:48.599586  458793 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:30:48.599810  458793 config.go:182] Loaded profile config "newest-cni-745275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:30:48.599909  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.602665  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.603035  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.603063  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.603260  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:48.603518  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.603692  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.603881  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:48.604089  458793 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:48.604449  458793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:30:48.604474  458793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:30:48.903732  458793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:30:48.903760  458793 machine.go:91] provisioned docker machine in 860.309402ms
	I0109 00:30:48.903772  458793 start.go:300] post-start starting for "newest-cni-745275" (driver="kvm2")
	I0109 00:30:48.903786  458793 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:30:48.903810  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:48.904169  458793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:30:48.904212  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:48.906790  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.907127  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:48.907159  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:48.907266  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:48.907521  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:48.907674  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:48.907801  458793 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:48.993445  458793 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:30:48.997465  458793 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:30:48.997493  458793 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:30:48.997562  458793 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:30:48.997659  458793 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:30:48.997777  458793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:30:49.005716  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:30:49.027994  458793 start.go:303] post-start completed in 124.206862ms
	I0109 00:30:49.028026  458793 fix.go:56] fixHost completed within 18.535553651s
	I0109 00:30:49.028055  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:49.030857  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.031229  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:49.031253  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.031476  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:49.031724  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:49.031895  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:49.032085  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:49.032274  458793 main.go:141] libmachine: Using SSH client type: native
	I0109 00:30:49.032607  458793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:30:49.032619  458793 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:30:49.144102  458793 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760249.092092100
	
	I0109 00:30:49.144125  458793 fix.go:206] guest clock: 1704760249.092092100
	I0109 00:30:49.144132  458793 fix.go:219] Guest: 2024-01-09 00:30:49.0920921 +0000 UTC Remote: 2024-01-09 00:30:49.028031309 +0000 UTC m=+18.692650553 (delta=64.060791ms)
	I0109 00:30:49.144151  458793 fix.go:190] guest clock delta is within tolerance: 64.060791ms
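The guest clock check parses the guest's date output and compares it against the host-side timestamp; here the delta is about 64ms, which is inside the allowed skew, so no clock correction is forced. A tiny worked example of that comparison using the two timestamps from the log (the 2-second tolerance is an assumed threshold for illustration, not necessarily minikube's actual value):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log: guest clock vs. host-side wall clock.
	guest := time.Unix(1704760249, 92092100)                       // 1704760249.092092100
	remote := time.Date(2024, 1, 9, 0, 30, 49, 28031309, time.UTC) // 2024-01-09 00:30:49.028031309 UTC
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance,
		delta < tolerance && delta > -tolerance)
}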
	I0109 00:30:49.144157  458793 start.go:83] releasing machines lock for "newest-cni-745275", held for 18.651699388s
	I0109 00:30:49.144182  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:49.144455  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:30:49.147436  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.147868  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:49.147896  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.148035  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:49.148590  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:49.148772  458793 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:49.148879  458793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:30:49.148947  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:49.148977  458793 ssh_runner.go:195] Run: cat /version.json
	I0109 00:30:49.149004  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:49.151586  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.151623  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.152019  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:49.152060  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.152089  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:49.152107  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:49.152219  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:49.152360  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:49.152448  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:49.152531  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:49.152650  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:49.152655  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:49.152823  458793 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:49.152828  458793 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:49.257728  458793 ssh_runner.go:195] Run: systemctl --version
	I0109 00:30:49.263883  458793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:30:49.407846  458793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:30:49.413972  458793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:30:49.414064  458793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:30:49.431210  458793 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:30:49.431236  458793 start.go:475] detecting cgroup driver to use...
	I0109 00:30:49.431320  458793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:30:49.446058  458793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:30:49.459441  458793 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:30:49.459508  458793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:30:49.474771  458793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:30:49.488764  458793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:30:49.588942  458793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:30:49.708036  458793 docker.go:219] disabling docker service ...
	I0109 00:30:49.708143  458793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:30:49.722676  458793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:30:49.734968  458793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:30:49.839230  458793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:30:49.956596  458793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:30:49.969621  458793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:30:49.986390  458793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:30:49.986459  458793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:30:49.995312  458793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:30:49.995391  458793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:30:50.004153  458793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:30:50.012755  458793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:30:50.021454  458793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
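Rather than writing a fresh config, the run patches /etc/crio/crio.conf.d/02-crio.conf in place: point pause_image at registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", drop any existing conmon_cgroup line, and re-insert conmon_cgroup = "pod" after the cgroup manager. A rough Go equivalent of those sed edits, operating on an in-memory sample instead of the remote file:

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same three edits the log performs with sed.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(patchCrioConf(sample))
}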
	I0109 00:30:50.030516  458793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:30:50.038199  458793 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:30:50.038262  458793 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:30:50.051222  458793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:30:50.059845  458793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:30:50.169398  458793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:30:50.335308  458793 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:30:50.335407  458793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:30:50.340468  458793 start.go:543] Will wait 60s for crictl version
	I0109 00:30:50.340534  458793 ssh_runner.go:195] Run: which crictl
	I0109 00:30:50.344333  458793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:30:50.380738  458793 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:30:50.380829  458793 ssh_runner.go:195] Run: crio --version
	I0109 00:30:50.429205  458793 ssh_runner.go:195] Run: crio --version
	I0109 00:30:50.480572  458793 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:30:50.482106  458793 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:30:50.484798  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:50.485132  458793 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:30:42 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:50.485157  458793 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:50.485340  458793 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:30:50.489661  458793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:30:50.504408  458793 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0109 00:30:50.505679  458793 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:30:50.505763  458793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:30:50.554302  458793 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:30:50.554385  458793 ssh_runner.go:195] Run: which lz4
	I0109 00:30:50.558304  458793 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:30:50.562339  458793 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:30:50.562371  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0109 00:30:52.088606  458793 crio.go:444] Took 1.530332 seconds to copy over tarball
	I0109 00:30:52.088693  458793 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:30:54.862329  458793 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.773602844s)
	I0109 00:30:54.862380  458793 crio.go:451] Took 2.773750 seconds to extract the tarball
	I0109 00:30:54.862391  458793 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:30:54.899491  458793 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:30:54.949726  458793 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:30:54.949755  458793 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:30:54.949828  458793 ssh_runner.go:195] Run: crio config
	I0109 00:30:55.007768  458793 cni.go:84] Creating CNI manager for ""
	I0109 00:30:55.007793  458793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:30:55.007813  458793 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0109 00:30:55.007831  458793 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-745275 NodeName:newest-cni-745275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:30:55.007988  458793 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-745275"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:30:55.008148  458793 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-745275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:30:55.008229  458793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:30:55.016656  458793 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:30:55.016733  458793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:30:55.024459  458793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0109 00:30:55.039869  458793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:30:55.054985  458793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0109 00:30:55.070713  458793 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0109 00:30:55.074135  458793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:30:55.085800  458793 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275 for IP: 192.168.72.107
	I0109 00:30:55.085828  458793 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:55.085970  458793 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:30:55.086023  458793 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:30:55.086116  458793 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.key
	I0109 00:30:55.086171  458793 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713
	I0109 00:30:55.086213  458793 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key
	I0109 00:30:55.086310  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:30:55.086339  458793 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:30:55.086349  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:30:55.086376  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:30:55.086405  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:30:55.086426  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:30:55.086465  458793 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:30:55.087068  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:30:55.109456  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:30:55.132443  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:30:55.155710  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:30:55.178700  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:30:55.201782  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:30:55.224111  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:30:55.246873  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:30:55.269685  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:30:55.291569  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:30:55.313679  458793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:30:55.339116  458793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:30:55.356970  458793 ssh_runner.go:195] Run: openssl version
	I0109 00:30:55.362454  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:30:55.371925  458793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:55.376540  458793 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:55.376605  458793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:30:55.382112  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:30:55.391685  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:30:55.401193  458793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:30:55.405973  458793 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:30:55.406012  458793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:30:55.411725  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:30:55.423834  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:30:55.432826  458793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:30:55.437535  458793 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:30:55.437593  458793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:30:55.443047  458793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
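Each CA certificate copied to /usr/share/ca-certificates is made visible to OpenSSL by symlinking it into /etc/ssl/certs under its subject-hash name (the b5213941.0, 51391683.0 and 3ec20f2e.0 links above). A sketch of computing that link name by shelling out to openssl x509 -hash, as the log does; installCALink is an illustrative helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCALink computes the OpenSSL subject hash of a PEM certificate and
// links it into certsDir as <hash>.0, which is how OpenSSL-based clients
// discover trusted CAs.
func installCALink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Equivalent of: test -L <link> || ln -fs <certPath> <link>
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCALink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}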
	I0109 00:30:55.451860  458793 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:30:55.455959  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:30:55.461549  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:30:55.467718  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:30:55.473127  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:30:55.478662  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:30:55.484027  458793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:30:55.489430  458793 kubeadm.go:404] StartCluster: {Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false syste
m_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:30:55.489543  458793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:30:55.489592  458793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:30:55.527020  458793 cri.go:89] found id: ""
	I0109 00:30:55.527093  458793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:30:55.536523  458793 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:30:55.536549  458793 kubeadm.go:636] restartCluster start
	I0109 00:30:55.536596  458793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:30:55.545013  458793 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:55.545662  458793 kubeconfig.go:135] verify returned: extract IP: "newest-cni-745275" does not appear in /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:30:55.545883  458793 kubeconfig.go:146] "newest-cni-745275" context is missing from /home/jenkins/minikube-integration/17830-399915/kubeconfig - will repair!
	I0109 00:30:55.546530  458793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:55.548121  458793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:30:55.556602  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:55.556665  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:55.570030  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:56.057664  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:56.057769  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:56.072021  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:56.557562  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:56.557675  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:56.570617  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:57.057018  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:57.057101  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:57.069857  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:57.556967  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:57.557067  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:57.571041  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:58.057687  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:58.057809  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:58.070894  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:58.557581  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:58.557657  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:58.571581  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:59.057105  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:59.057214  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:59.071021  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:30:59.557620  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:30:59.557709  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:30:59.571456  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:31:00.056985  458793 api_server.go:166] Checking apiserver status ...
	I0109 00:31:00.057095  458793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:31:00.071136  458793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:41 UTC, ends at Tue 2024-01-09 00:31:01 UTC. --
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.502767726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760261502700404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b75c0af5-b49c-48ce-9740-4b4a1d5778b5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.503749544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=988d6fcd-e6c1-410f-a257-bfadc4eab7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.503830538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=988d6fcd-e6c1-410f-a257-bfadc4eab7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.504134608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=988d6fcd-e6c1-410f-a257-bfadc4eab7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.548202083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=66d3838a-6693-4c27-8bb6-87c4ba43efb8 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.548258120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=66d3838a-6693-4c27-8bb6-87c4ba43efb8 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.550185541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b512dee6-29e0-47b2-931f-ca301c8e688d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.550863566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760261550842828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b512dee6-29e0-47b2-931f-ca301c8e688d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.555028113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a0757aea-6027-44f0-8fe2-a9da6d4d49f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.555176395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a0757aea-6027-44f0-8fe2-a9da6d4d49f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.555384233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a0757aea-6027-44f0-8fe2-a9da6d4d49f1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.598339022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4afcf65f-86eb-45b3-a02c-544aad6b5ab2 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.598424340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4afcf65f-86eb-45b3-a02c-544aad6b5ab2 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.599719834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=215f6bd3-1f49-4589-b27f-5659abd6b2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.600230239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760261600215078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=215f6bd3-1f49-4589-b27f-5659abd6b2e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.600745944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa97c589-6875-4d74-8885-27aea18369ff name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.600811875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa97c589-6875-4d74-8885-27aea18369ff name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.601073326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa97c589-6875-4d74-8885-27aea18369ff name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.639170508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=95cfa5ff-eced-45d1-b6da-75b06e1717fd name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.639254421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=95cfa5ff-eced-45d1-b6da-75b06e1717fd name=/runtime.v1.RuntimeService/Version
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.640453999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f279d041-9136-4895-b78a-c8420163292a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.640899633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760261640885405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f279d041-9136-4895-b78a-c8420163292a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.642193227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd67b4b5-962a-4826-8802-0b11921855c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.642258123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd67b4b5-962a-4826-8802-0b11921855c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:31:01 default-k8s-diff-port-834116 crio[713]: time="2024-01-09 00:31:01.642430606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759054628770386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a342658092873e99c9d58351e7d55938fcee90fa4bffde6e020953f2f5160a17,PodSandboxId:80c1ed307bf19ade346e4f2c66ee9b33531e8e31a8edcbf9afaf9c08707535e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704759030097610020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce0bd577-8a0e-4801-bd3b-190307b70852,},Annotations:map[string]string{io.kubernetes.container.hash: 77101943,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd,PodSandboxId:41ec024e32ec6763f343c55c4e0baff0290a4d983b41e1fc1a133879ca1a7378,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759028712120777,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-csrwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4,},Annotations:map[string]string{io.kubernetes.container.hash: b62817cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57,PodSandboxId:c309a3c21eeb1aabad65573cbae0da98c3dfa53f0c2a7756673247d13876f018,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704759023147518847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 49bd18e5-b0c3-4eaa-83e6-2d347d47e505,},Annotations:map[string]string{io.kubernetes.container.hash: 6dacdfc2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc,PodSandboxId:5b80a8bc106228f70d7e5a732ed0b9b9a5c1bc4b2cab98a4956b21489c6056b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759023317867468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9dmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
bf297f4-2dc1-48b8-9fd6-830c17bf25fc,},Annotations:map[string]string{io.kubernetes.container.hash: e90c8ad9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823,PodSandboxId:2b5c57c5143c75585ba096a6405ded61bf028218f4daecd8207cffde34198fe4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759015119713328,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71774c2e352e1193dcbf9a604298a3d2,},An
notations:map[string]string{io.kubernetes.container.hash: 6ad02e41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c,PodSandboxId:ee81328024e4d7a43ed0bfb83c832aff4a359a06f69804289ac309b7bf86dec9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759014950471104,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd480c6e2b06d72f72e531d976768f51,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46,PodSandboxId:a3bb290f4c4fde2b29a97a9ec7fee35eccfe49b2a0323637016cd196e20ed022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759014690513592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
373de2f78a5671e153568150486552a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc,PodSandboxId:56c2cc0fac5c5fec00e65e5a3a3c2101b64ac08b0fefa79fb278f893fefd8c1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759014512178885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-834116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
7806e3d92b66d3af04b3c64fb7585d2,},Annotations:map[string]string{io.kubernetes.container.hash: 597e394e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd67b4b5-962a-4826-8802-0b11921855c8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0fd42aafbd15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   c309a3c21eeb1       storage-provisioner
	a342658092873       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   80c1ed307bf19       busybox
	bd1948e3c50bc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   41ec024e32ec6       coredns-5dd5756b68-csrwr
	301f60b371271       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   5b80a8bc10622       kube-proxy-p9dmf
	f2c5c87fdbe85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   c309a3c21eeb1       storage-provisioner
	8cc2cc6a6ffc0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   2b5c57c5143c7       etcd-default-k8s-diff-port-834116
	a457619a25952       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   ee81328024e4d       kube-scheduler-default-k8s-diff-port-834116
	2a0d4cebebe6e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   a3bb290f4c4fd       kube-controller-manager-default-k8s-diff-port-834116
	fc9430c284b97       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   56c2cc0fac5c5       kube-apiserver-default-k8s-diff-port-834116
	
	
	==> coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46273 - 35817 "HINFO IN 2123054911538060451.4617250452686183186. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036280919s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-834116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-834116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=default-k8s-diff-port-834116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_01_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:01:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-834116
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:30:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:26:09 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:26:09 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:26:09 +0000   Tue, 09 Jan 2024 00:01:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:26:09 +0000   Tue, 09 Jan 2024 00:10:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    default-k8s-diff-port-834116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5291690b871f4ffe8e230bce47c5b516
	  System UUID:                5291690b-871f-4ffe-8e23-0bce47c5b516
	  Boot ID:                    995ef9c1-c726-4e38-ac79-d0e4b66e8941
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-csrwr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-834116                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-834116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-834116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-p9dmf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-834116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-mbf7k                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-834116 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-834116 event: Registered Node default-k8s-diff-port-834116 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-834116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-834116 event: Registered Node default-k8s-diff-port-834116 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.645947] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.730312] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.170046] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.563631] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.511081] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.133780] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.167897] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.120647] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.235724] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Jan 9 00:10] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +21.023692] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] <==
	{"level":"warn","ts":"2024-01-09T00:10:22.620156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"546.843425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-01-09T00:10:22.620628Z","caller":"traceutil/trace.go:171","msg":"trace[763981380] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:523; }","duration":"547.307639ms","start":"2024-01-09T00:10:22.07323Z","end":"2024-01-09T00:10:22.620538Z","steps":["trace[763981380] 'range keys from in-memory index tree'  (duration: 546.750006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:22.620779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.073217Z","time spent":"547.546169ms","remote":"127.0.0.1:44144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" "}
	{"level":"warn","ts":"2024-01-09T00:10:22.620936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"581.645425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" ","response":"range_response_count:2 size:9309"}
	{"level":"info","ts":"2024-01-09T00:10:22.621651Z","caller":"traceutil/trace.go:171","msg":"trace[1574384919] range","detail":"{range_begin:/registry/deployments/kube-system/; range_end:/registry/deployments/kube-system0; response_count:2; response_revision:523; }","duration":"582.365155ms","start":"2024-01-09T00:10:22.039269Z","end":"2024-01-09T00:10:22.621634Z","steps":["trace[1574384919] 'range keys from in-memory index tree'  (duration: 581.536354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:22.621692Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.039254Z","time spent":"582.425301ms","remote":"127.0.0.1:44200","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":2,"response size":9332,"request content":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" "}
	{"level":"info","ts":"2024-01-09T00:10:23.050163Z","caller":"traceutil/trace.go:171","msg":"trace[566505004] linearizableReadLoop","detail":"{readStateIndex:552; appliedIndex:551; }","duration":"407.572451ms","start":"2024-01-09T00:10:22.642578Z","end":"2024-01-09T00:10:23.05015Z","steps":["trace[566505004] 'read index received'  (duration: 407.374013ms)","trace[566505004] 'applied index is now lower than readState.Index'  (duration: 197.738µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:10:23.050321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.730468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-01-09T00:10:23.050386Z","caller":"traceutil/trace.go:171","msg":"trace[812758295] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/token-cleaner; range_end:; response_count:1; response_revision:523; }","duration":"407.81185ms","start":"2024-01-09T00:10:22.642564Z","end":"2024-01-09T00:10:23.050376Z","steps":["trace[812758295] 'agreement among raft nodes before linearized reading'  (duration: 407.6933ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:23.050439Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.642557Z","time spent":"407.872495ms","remote":"127.0.0.1:44144","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":214,"request content":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" "}
	{"level":"info","ts":"2024-01-09T00:10:23.05085Z","caller":"traceutil/trace.go:171","msg":"trace[844840029] transaction","detail":"{read_only:false; number_of_response:0; response_revision:523; }","duration":"409.100554ms","start":"2024-01-09T00:10:22.641742Z","end":"2024-01-09T00:10:23.050842Z","steps":["trace[844840029] 'process raft request'  (duration: 408.335289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:10:23.05116Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:10:22.641727Z","time spent":"409.179023ms","remote":"127.0.0.1:44176","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:328 >> failure:<>"}
	{"level":"info","ts":"2024-01-09T00:10:23.296351Z","caller":"traceutil/trace.go:171","msg":"trace[926543455] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"133.4495ms","start":"2024-01-09T00:10:23.162889Z","end":"2024-01-09T00:10:23.296339Z","steps":["trace[926543455] 'read index received'  (duration: 133.355795ms)","trace[926543455] 'applied index is now lower than readState.Index'  (duration: 93.296µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:10:23.296513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.619846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-01-09T00:10:23.296555Z","caller":"traceutil/trace.go:171","msg":"trace[434775922] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pvc-protection-controller; range_end:; response_count:1; response_revision:523; }","duration":"133.677564ms","start":"2024-01-09T00:10:23.16287Z","end":"2024-01-09T00:10:23.296548Z","steps":["trace[434775922] 'agreement among raft nodes before linearized reading'  (duration: 133.591456ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:10:23.296731Z","caller":"traceutil/trace.go:171","msg":"trace[815516781] transaction","detail":"{read_only:false; number_of_response:0; response_revision:523; }","duration":"135.862499ms","start":"2024-01-09T00:10:23.160864Z","end":"2024-01-09T00:10:23.296727Z","steps":["trace[815516781] 'process raft request'  (duration: 135.428048ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:20:18.065991Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":862}
	{"level":"info","ts":"2024-01-09T00:20:18.068765Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":862,"took":"2.054428ms","hash":1354971634}
	{"level":"info","ts":"2024-01-09T00:20:18.068851Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1354971634,"revision":862,"compact-revision":-1}
	{"level":"info","ts":"2024-01-09T00:25:18.074455Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1104}
	{"level":"info","ts":"2024-01-09T00:25:18.076565Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1104,"took":"1.780852ms","hash":2925992603}
	{"level":"info","ts":"2024-01-09T00:25:18.076642Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2925992603,"revision":1104,"compact-revision":862}
	{"level":"info","ts":"2024-01-09T00:30:18.08683Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1347}
	{"level":"info","ts":"2024-01-09T00:30:18.088833Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1347,"took":"1.4272ms","hash":2696645465}
	{"level":"info","ts":"2024-01-09T00:30:18.08931Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2696645465,"revision":1347,"compact-revision":1104}
	
	
	==> kernel <==
	 00:31:01 up 21 min,  0 users,  load average: 0.08, 0.17, 0.17
	Linux default-k8s-diff-port-834116 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] <==
	E0109 00:26:21.115535       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:26:21.116787       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:27:19.896666       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0109 00:28:19.897319       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:28:21.115540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:21.115612       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:28:21.115634       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:28:21.118089       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:21.118177       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:28:21.118187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:29:19.896056       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0109 00:30:19.897180       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:30:20.120095       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:30:20.120231       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:30:20.120702       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:30:21.121101       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:30:21.121319       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:30:21.121358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:30:21.121247       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:30:21.121424       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:30:21.122623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] <==
	I0109 00:25:04.682489       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:25:34.136408       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:34.691775       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:04.142343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:04.701260       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:34.148612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:34.714621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:26:46.390060       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="374.513µs"
	I0109 00:26:58.389501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="268.2µs"
	E0109 00:27:04.154187       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:04.723913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:34.164554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:34.733173       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:04.170922       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:04.742256       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:34.177528       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:34.752664       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:04.183430       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:04.765631       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:34.189089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:34.774152       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:30:04.194588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:30:04.785547       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:30:34.201144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:30:34.794823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] <==
	I0109 00:10:23.769296       1 server_others.go:69] "Using iptables proxy"
	I0109 00:10:23.788755       1 node.go:141] Successfully retrieved node IP: 192.168.39.73
	I0109 00:10:23.850374       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:10:23.850677       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:10:23.854843       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:10:23.854917       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:10:23.855298       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:10:23.855338       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:10:23.856253       1 config.go:188] "Starting service config controller"
	I0109 00:10:23.856304       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:10:23.856339       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:10:23.856354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:10:23.858836       1 config.go:315] "Starting node config controller"
	I0109 00:10:23.858949       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:10:23.958591       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:10:23.958738       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:10:23.959790       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] <==
	I0109 00:10:17.954301       1 serving.go:348] Generated self-signed cert in-memory
	W0109 00:10:19.976729       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0109 00:10:19.976875       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:10:19.976922       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0109 00:10:19.977020       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0109 00:10:20.102192       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0109 00:10:20.102302       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:10:20.108804       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0109 00:10:20.109234       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0109 00:10:20.109289       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0109 00:10:20.154875       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 00:10:20.256120       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:41 UTC, ends at Tue 2024-01-09 00:31:02 UTC. --
	Jan 09 00:28:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:28:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:28:15 default-k8s-diff-port-834116 kubelet[918]: E0109 00:28:15.371108     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:28:27 default-k8s-diff-port-834116 kubelet[918]: E0109 00:28:27.369390     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:28:40 default-k8s-diff-port-834116 kubelet[918]: E0109 00:28:40.369749     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:28:52 default-k8s-diff-port-834116 kubelet[918]: E0109 00:28:52.369127     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:29:04 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:04.370463     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:29:13 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:13.503133     918 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:29:13 default-k8s-diff-port-834116 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:29:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:29:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:29:17 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:17.370222     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:29:30 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:30.369264     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:29:41 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:41.369634     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:29:52 default-k8s-diff-port-834116 kubelet[918]: E0109 00:29:52.369288     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:30:03 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:03.369710     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:30:13 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:13.377138     918 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 09 00:30:13 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:13.500215     918 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:30:13 default-k8s-diff-port-834116 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:30:13 default-k8s-diff-port-834116 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:30:13 default-k8s-diff-port-834116 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:30:18 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:18.370630     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:30:31 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:31.370674     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:30:46 default-k8s-diff-port-834116 kubelet[918]: E0109 00:30:46.368844     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	Jan 09 00:31:01 default-k8s-diff-port-834116 kubelet[918]: E0109 00:31:01.370409     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mbf7k" podUID="61b7ea36-0b24-42e9-9937-d20ea545f63d"
	
	
	==> storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] <==
	I0109 00:10:54.765586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:10:54.777428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:10:54.777503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:11:12.194279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:11:12.195252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23!
	I0109 00:11:12.196821       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9cd4331f-223f-4ccb-8942-664734695597", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23 became leader
	I0109 00:11:12.295868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-834116_acb4fb37-e836-4f25-8d20-910d7da56b23!
	
	
	==> storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] <==
	I0109 00:10:23.772413       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0109 00:10:53.774877       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mbf7k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k: exit status 1 (61.99144ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mbf7k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-834116 describe pod metrics-server-57f55c9bc5-mbf7k: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.18s)
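
Note: the kubelet journal above shows metrics-server-57f55c9bc5-mbf7k stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, which is consistent with the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` invocation recorded in the Audit table further down. A minimal Go sketch of an equivalent out-of-band check is below; it shells out to kubectl the way the harness does. The `k8s-app=metrics-server` label selector and the program itself are assumptions for illustration, not part of the minikube test harness.

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical helper: print each metrics-server pod's container "waiting"
// reason in the default-k8s-diff-port-834116 profile. The context name comes
// from the log above; the label selector is assumed.
func main() {
	jsonpath := `{range .items[*]}{.metadata.name}{": "}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-834116",
		"-n", "kube-system", "get", "pods", "-l", "k8s-app=metrics-server",
		"-o", "jsonpath="+jsonpath).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Print(string(out)) // for this run we would expect: ... ImagePullBackOff
}
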

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (168.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:26:28.295634  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:26:37.766590  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:27:20.222923  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003293 -n old-k8s-version-003293
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:29:07.515483314 +0000 UTC m=+5846.392433545
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-003293 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-003293 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.729µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-003293 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
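
The assertion at start_stop_delete_test.go:297 expects the dashboard deployment to reference registry.k8s.io/echoserver:1.4, the image substituted when the dashboard addon was enabled. A minimal Go sketch of the same check done by hand follows; it assumes the deployment is named dashboard-metrics-scraper (as in the describe command above) and shells out to kubectl like the harness does. This is an illustrative sketch, not the harness's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Hypothetical standalone version of the image check: does the
// dashboard-metrics-scraper deployment reference the substituted
// registry.k8s.io/echoserver:1.4 image?
func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-003293",
		"-n", "kubernetes-dashboard", "get", "deploy", "dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		fmt.Println("dashboard-metrics-scraper uses the expected image")
	} else {
		fmt.Printf("unexpected image(s): %q\n", out)
	}
}
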
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-003293 logs -n 25: (1.729111311s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo cat                              | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:05:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:05:27.711531  452488 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:05:27.711728  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711742  452488 out.go:309] Setting ErrFile to fd 2...
	I0109 00:05:27.711750  452488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:05:27.711982  452488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:05:27.712562  452488 out.go:303] Setting JSON to false
	I0109 00:05:27.713635  452488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17254,"bootTime":1704741474,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:05:27.713709  452488 start.go:138] virtualization: kvm guest
	I0109 00:05:27.716110  452488 out.go:177] * [default-k8s-diff-port-834116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:05:27.718021  452488 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:05:27.719311  452488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:05:27.718049  452488 notify.go:220] Checking for updates...
	I0109 00:05:27.720754  452488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:05:27.722073  452488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:05:27.723496  452488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:05:27.724923  452488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:05:27.726663  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:05:27.727158  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.727261  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.741812  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0109 00:05:27.742300  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.742911  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.742943  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.743249  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.743438  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.743694  452488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:05:27.743987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:05:27.744027  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:05:27.758231  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35027
	I0109 00:05:27.758620  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:05:27.759039  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:05:27.759069  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:05:27.759349  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:05:27.759570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:05:27.797687  452488 out.go:177] * Using the kvm2 driver based on existing profile
	I0109 00:05:27.799282  452488 start.go:298] selected driver: kvm2
	I0109 00:05:27.799301  452488 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.799485  452488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:05:27.800156  452488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.800240  452488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:05:27.815851  452488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:05:27.816303  452488 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:05:27.816371  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:05:27.816384  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:05:27.816406  452488 start_flags.go:323] config:
	{Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-83411
6 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:05:27.816592  452488 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:05:27.818643  452488 out.go:177] * Starting control plane node default-k8s-diff-port-834116 in cluster default-k8s-diff-port-834116
	I0109 00:05:30.179677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:27.820207  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:05:27.820246  452488 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0109 00:05:27.820258  452488 cache.go:56] Caching tarball of preloaded images
	I0109 00:05:27.820344  452488 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:05:27.820354  452488 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:05:27.820455  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:05:27.820632  452488 start.go:365] acquiring machines lock for default-k8s-diff-port-834116: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:05:33.251703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:39.331707  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:42.403645  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:48.483635  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:51.555692  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:05:57.635653  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:00.707722  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:06.787696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:09.859664  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:15.939733  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:19.011687  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:25.091759  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:28.163666  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:34.243673  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:37.315693  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:43.395652  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:46.467622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:52.547639  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:06:55.619655  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:01.699734  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:04.771686  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:10.851703  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:13.923711  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:20.003883  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:23.075726  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:29.155735  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:32.227698  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:38.307696  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:41.379724  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:47.459727  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:50.531708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:56.611621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:07:59.683677  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:05.763622  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:08.835708  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:14.915674  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:17.987706  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:24.067730  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:27.139621  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:33.219667  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:36.291651  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:42.371678  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
	I0109 00:08:45.443660  451943 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.81:22: connect: no route to host
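	The repeated "no route to host" entries above are libmachine dialing the guest's SSH port (192.168.72.81:22) over and over until the VM answers. As a rough standalone illustration of that probe loop, not minikube's actual code, the following Go sketch keeps dialing an address until it connects or a total budget runs out; the address, per-dial timeout, and retry interval are placeholders.

// waitssh.go: minimal sketch of an SSH-port reachability probe with retries.
// Not minikube's implementation; values below are illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP keeps dialing addr until it answers or the total deadline passes.
func waitForTCP(addr string, perDial, total time.Duration) error {
	deadline := time.Now().Add(total)
	for {
		conn, err := net.DialTimeout("tcp", addr, perDial)
		if err == nil {
			conn.Close()
			return nil // port is reachable
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %s: last error: %w", addr, err)
		}
		fmt.Printf("dial %s failed (%v), retrying...\n", addr, err)
		time.Sleep(3 * time.Second)
	}
}

func main() {
	// 192.168.72.81:22 is the guest address the log above was probing.
	if err := waitForTCP("192.168.72.81:22", 10*time.Second, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}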
	I0109 00:08:48.448024  451984 start.go:369] acquired machines lock for "embed-certs-845373" in 4m36.156097213s
	I0109 00:08:48.448197  451984 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:08:48.448239  451984 fix.go:54] fixHost starting: 
	I0109 00:08:48.448769  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:08:48.448810  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:08:48.464359  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0109 00:08:48.465014  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:08:48.465634  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:08:48.465669  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:08:48.466022  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:08:48.466241  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:08:48.466431  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:08:48.468132  451984 fix.go:102] recreateIfNeeded on embed-certs-845373: state=Stopped err=<nil>
	I0109 00:08:48.468162  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	W0109 00:08:48.468339  451984 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:08:48.470346  451984 out.go:177] * Restarting existing kvm2 VM for "embed-certs-845373" ...
	I0109 00:08:48.445374  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:08:48.445415  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:08:48.447757  451943 machine.go:91] provisioned docker machine in 4m37.407825673s
	I0109 00:08:48.447823  451943 fix.go:56] fixHost completed within 4m37.428599196s
	I0109 00:08:48.447831  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 4m37.428619873s
	W0109 00:08:48.447876  451943 start.go:694] error starting host: provision: host is not running
	W0109 00:08:48.448289  451943 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0109 00:08:48.448305  451943 start.go:709] Will try again in 5 seconds ...
	I0109 00:08:48.471819  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Start
	I0109 00:08:48.471966  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring networks are active...
	I0109 00:08:48.472753  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network default is active
	I0109 00:08:48.473111  451984 main.go:141] libmachine: (embed-certs-845373) Ensuring network mk-embed-certs-845373 is active
	I0109 00:08:48.473441  451984 main.go:141] libmachine: (embed-certs-845373) Getting domain xml...
	I0109 00:08:48.474114  451984 main.go:141] libmachine: (embed-certs-845373) Creating domain...
	I0109 00:08:49.716628  451984 main.go:141] libmachine: (embed-certs-845373) Waiting to get IP...
	I0109 00:08:49.717606  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.718022  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.718080  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.717994  452995 retry.go:31] will retry after 247.787821ms: waiting for machine to come up
	I0109 00:08:49.967655  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:49.968169  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:49.968203  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:49.968101  452995 retry.go:31] will retry after 339.65094ms: waiting for machine to come up
	I0109 00:08:50.309542  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.310008  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.310041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.309944  452995 retry.go:31] will retry after 475.654088ms: waiting for machine to come up
	I0109 00:08:50.787560  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:50.787930  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:50.787973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:50.787876  452995 retry.go:31] will retry after 437.198744ms: waiting for machine to come up
	I0109 00:08:51.226414  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.226866  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.226901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.226817  452995 retry.go:31] will retry after 501.606265ms: waiting for machine to come up
	I0109 00:08:51.730571  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:51.731041  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:51.731084  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:51.730949  452995 retry.go:31] will retry after 707.547375ms: waiting for machine to come up
	I0109 00:08:53.450389  451943 start.go:365] acquiring machines lock for old-k8s-version-003293: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
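	The "acquiring machines lock ... Timeout:13m0s" entries above, together with the later "acquired machines lock" and "releasing machines lock" lines, show concurrent profiles serializing machine operations behind a named lock with a long timeout. The sketch below shows only the general acquire-with-timeout-then-release shape using a plain lock file; it is not minikube's lock implementation, and the path and durations are made up for illustration.

// lockfile.go: acquire a lock file with a timeout, then release it.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire creates path exclusively, retrying every delay until timeout.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !os.IsExist(err) {
			return nil, err // unexpected error, not "already locked"
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to start or fix the machine now")
}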
	I0109 00:08:52.440038  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:52.440373  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:52.440434  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:52.440330  452995 retry.go:31] will retry after 1.02016439s: waiting for machine to come up
	I0109 00:08:53.462628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:53.463090  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:53.463120  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:53.463037  452995 retry.go:31] will retry after 1.322196175s: waiting for machine to come up
	I0109 00:08:54.786979  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:54.787514  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:54.787540  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:54.787465  452995 retry.go:31] will retry after 1.260135214s: waiting for machine to come up
	I0109 00:08:56.049973  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:56.050450  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:56.050478  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:56.050415  452995 retry.go:31] will retry after 1.476819521s: waiting for machine to come up
	I0109 00:08:57.529060  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:08:57.529497  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:08:57.529527  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:08:57.529444  452995 retry.go:31] will retry after 2.830903204s: waiting for machine to come up
	I0109 00:09:00.362901  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:00.363333  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:00.363372  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:00.363292  452995 retry.go:31] will retry after 3.093040214s: waiting for machine to come up
	I0109 00:09:03.460541  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:03.461066  451984 main.go:141] libmachine: (embed-certs-845373) DBG | unable to find current IP address of domain embed-certs-845373 in network mk-embed-certs-845373
	I0109 00:09:03.461103  451984 main.go:141] libmachine: (embed-certs-845373) DBG | I0109 00:09:03.461032  452995 retry.go:31] will retry after 3.190401984s: waiting for machine to come up
	I0109 00:09:06.654729  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655295  451984 main.go:141] libmachine: (embed-certs-845373) Found IP for machine: 192.168.50.132
	I0109 00:09:06.655331  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has current primary IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.655343  451984 main.go:141] libmachine: (embed-certs-845373) Reserving static IP address...
	I0109 00:09:06.655828  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.655851  451984 main.go:141] libmachine: (embed-certs-845373) DBG | skip adding static IP to network mk-embed-certs-845373 - found existing host DHCP lease matching {name: "embed-certs-845373", mac: "52:54:00:5b:26:23", ip: "192.168.50.132"}
	I0109 00:09:06.655865  451984 main.go:141] libmachine: (embed-certs-845373) Reserved static IP address: 192.168.50.132
	I0109 00:09:06.655880  451984 main.go:141] libmachine: (embed-certs-845373) Waiting for SSH to be available...
	I0109 00:09:06.655969  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Getting to WaitForSSH function...
	I0109 00:09:06.658083  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658468  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.658501  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.658615  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH client type: external
	I0109 00:09:06.658650  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa (-rw-------)
	I0109 00:09:06.658704  451984 main.go:141] libmachine: (embed-certs-845373) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:06.658725  451984 main.go:141] libmachine: (embed-certs-845373) DBG | About to run SSH command:
	I0109 00:09:06.658741  451984 main.go:141] libmachine: (embed-certs-845373) DBG | exit 0
	I0109 00:09:06.751337  451984 main.go:141] libmachine: (embed-certs-845373) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:06.751683  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetConfigRaw
	I0109 00:09:06.752338  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:06.754749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755133  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.755161  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.755475  451984 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/config.json ...
	I0109 00:09:06.755689  451984 machine.go:88] provisioning docker machine ...
	I0109 00:09:06.755710  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:06.755939  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756108  451984 buildroot.go:166] provisioning hostname "embed-certs-845373"
	I0109 00:09:06.756133  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:06.756287  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.758391  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758651  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.758678  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.758821  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.759026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759151  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.759276  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.759419  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.759891  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.759906  451984 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-845373 && echo "embed-certs-845373" | sudo tee /etc/hostname
	I0109 00:09:06.897829  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845373
	
	I0109 00:09:06.897862  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:06.900776  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901151  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:06.901194  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:06.901354  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:06.901601  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901767  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:06.901930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:06.902093  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:06.902429  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:06.902457  451984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-845373' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845373/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-845373' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:07.035051  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:07.035088  451984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:07.035106  451984 buildroot.go:174] setting up certificates
	I0109 00:09:07.035141  451984 provision.go:83] configureAuth start
	I0109 00:09:07.035150  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetMachineName
	I0109 00:09:07.035470  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.038830  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039185  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.039216  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.039473  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.041628  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.041978  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.042006  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.042138  451984 provision.go:138] copyHostCerts
	I0109 00:09:07.042215  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:07.042235  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:07.042301  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:07.042386  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:07.042394  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:07.042420  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:07.042547  451984 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:07.042557  451984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:07.042582  451984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:07.042658  451984 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845373 san=[192.168.50.132 192.168.50.132 localhost 127.0.0.1 minikube embed-certs-845373]
	I0109 00:09:07.146928  451984 provision.go:172] copyRemoteCerts
	I0109 00:09:07.147000  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:07.147026  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.149665  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.149999  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.150025  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.150190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.150402  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.150624  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.150778  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.912619  452237 start.go:369] acquired machines lock for "no-preload-378213" in 4m22.586847609s
	I0109 00:09:07.912695  452237 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:07.912705  452237 fix.go:54] fixHost starting: 
	I0109 00:09:07.913160  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:07.913205  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:07.929558  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0109 00:09:07.930071  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:07.930620  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:09:07.930646  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:07.931015  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:07.931232  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:07.931421  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:09:07.933075  452237 fix.go:102] recreateIfNeeded on no-preload-378213: state=Stopped err=<nil>
	I0109 00:09:07.933114  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	W0109 00:09:07.933281  452237 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:07.935418  452237 out.go:177] * Restarting existing kvm2 VM for "no-preload-378213" ...
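	"Restarting existing kvm2 VM" means the driver asks libvirt to start the stopped domain again and then waits for it to come back up. A very rough standalone equivalent, assuming the virsh CLI is available and using the domain name from the log purely as a placeholder:

// restartvm.go: start a shut-off libvirt domain by shelling out to virsh.
// This is not the kvm2 driver's code path, only an illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	domain := "no-preload-378213" // placeholder: domain name taken from the log

	// Only start the domain if libvirt reports it as shut off.
	out, err := exec.Command("virsh", "domstate", domain).Output()
	if err != nil {
		fmt.Println("domstate failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "shut off" {
		if err := exec.Command("virsh", "start", domain).Run(); err != nil {
			fmt.Println("start failed:", err)
			return
		}
	}
	fmt.Println(domain, "is (re)started; next step is waiting for DHCP and SSH")
}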
	I0109 00:09:07.246432  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:07.270463  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0109 00:09:07.294094  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:09:07.317414  451984 provision.go:86] duration metric: configureAuth took 282.256583ms
	I0109 00:09:07.317462  451984 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:07.317651  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:07.317743  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.320246  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320529  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.320557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.320724  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.320930  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321068  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.321199  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.321480  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.321807  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.321831  451984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:07.649960  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:07.649991  451984 machine.go:91] provisioned docker machine in 894.285072ms
	I0109 00:09:07.650005  451984 start.go:300] post-start starting for "embed-certs-845373" (driver="kvm2")
	I0109 00:09:07.650020  451984 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:07.650052  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.650505  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:07.650537  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.653343  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653671  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.653695  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.653913  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.654147  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.654345  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.654548  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.745211  451984 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:07.749547  451984 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:07.749608  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:07.749694  451984 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:07.749790  451984 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:07.749906  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:07.758232  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:07.781504  451984 start.go:303] post-start completed in 131.476813ms
	I0109 00:09:07.781532  451984 fix.go:56] fixHost completed within 19.333293059s
	I0109 00:09:07.781556  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.784365  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.784751  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.784774  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.785021  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.785267  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785430  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.785570  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.785745  451984 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:07.786073  451984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.132 22 <nil> <nil>}
	I0109 00:09:07.786085  451984 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:07.912423  451984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758947.859859847
	
	I0109 00:09:07.912452  451984 fix.go:206] guest clock: 1704758947.859859847
	I0109 00:09:07.912462  451984 fix.go:219] Guest: 2024-01-09 00:09:07.859859847 +0000 UTC Remote: 2024-01-09 00:09:07.781536446 +0000 UTC m=+295.641408793 (delta=78.323401ms)
	I0109 00:09:07.912487  451984 fix.go:190] guest clock delta is within tolerance: 78.323401ms
	I0109 00:09:07.912494  451984 start.go:83] releasing machines lock for "embed-certs-845373", held for 19.464424699s
	I0109 00:09:07.912529  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.912827  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:07.915749  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916146  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.916177  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.916358  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.916865  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917042  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:09:07.917155  451984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:07.917208  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.917263  451984 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:07.917288  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:09:07.920121  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920158  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920573  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920608  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:07.920626  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920648  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:07.920703  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920858  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:09:07.920942  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921034  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:09:07.921122  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921185  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:09:07.921263  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:07.921282  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:09:08.040953  451984 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:08.046882  451984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:08.204801  451984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:08.214653  451984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:08.214741  451984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:08.232714  451984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:08.232750  451984 start.go:475] detecting cgroup driver to use...
	I0109 00:09:08.232881  451984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:08.254408  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:08.266926  451984 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:08.267015  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:08.278971  451984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:08.291982  451984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:08.395029  451984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:08.514444  451984 docker.go:219] disabling docker service ...
	I0109 00:09:08.514527  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:08.528548  451984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:08.540899  451984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:08.669118  451984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:08.776487  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:08.791617  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:08.809437  451984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:08.809509  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.818817  451984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:08.818891  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.828374  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.839820  451984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:08.849449  451984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:08.858899  451984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:08.869295  451984 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:08.869377  451984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:08.885387  451984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:08.895106  451984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:09.007897  451984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:09.197656  451984 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:09.197737  451984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:09.203174  451984 start.go:543] Will wait 60s for crictl version
	I0109 00:09:09.203264  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:09:09.207312  451984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:09.245917  451984 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:09.245996  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.296410  451984 ssh_runner.go:195] Run: crio --version
	I0109 00:09:09.345334  451984 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0109 00:09:07.937023  452237 main.go:141] libmachine: (no-preload-378213) Calling .Start
	I0109 00:09:07.937229  452237 main.go:141] libmachine: (no-preload-378213) Ensuring networks are active...
	I0109 00:09:07.938093  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network default is active
	I0109 00:09:07.938504  452237 main.go:141] libmachine: (no-preload-378213) Ensuring network mk-no-preload-378213 is active
	I0109 00:09:07.938868  452237 main.go:141] libmachine: (no-preload-378213) Getting domain xml...
	I0109 00:09:07.939609  452237 main.go:141] libmachine: (no-preload-378213) Creating domain...
	I0109 00:09:09.254019  452237 main.go:141] libmachine: (no-preload-378213) Waiting to get IP...
	I0109 00:09:09.254967  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.255375  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.255465  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.255333  453115 retry.go:31] will retry after 260.636384ms: waiting for machine to come up
	I0109 00:09:09.518054  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.518563  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.518590  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.518522  453115 retry.go:31] will retry after 320.770806ms: waiting for machine to come up
	I0109 00:09:09.841203  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:09.841675  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:09.841710  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:09.841604  453115 retry.go:31] will retry after 317.226014ms: waiting for machine to come up
	I0109 00:09:10.160137  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.160545  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.160576  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.160522  453115 retry.go:31] will retry after 452.723717ms: waiting for machine to come up
	I0109 00:09:09.346886  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetIP
	I0109 00:09:09.350050  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350407  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:09:09.350440  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:09:09.350626  451984 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:09.354884  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:09.367669  451984 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:09.367765  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:09.407793  451984 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:09.407876  451984 ssh_runner.go:195] Run: which lz4
	I0109 00:09:09.412172  451984 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:09:09.416303  451984 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:09.416331  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:11.408967  451984 crio.go:444] Took 1.996823 seconds to copy over tarball
	I0109 00:09:11.409067  451984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:10.615452  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:10.615971  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:10.615999  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:10.615922  453115 retry.go:31] will retry after 555.714359ms: waiting for machine to come up
	I0109 00:09:11.173767  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:11.174269  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:11.174301  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:11.174220  453115 retry.go:31] will retry after 843.630815ms: waiting for machine to come up
	I0109 00:09:12.019354  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:12.019896  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:12.019962  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:12.019884  453115 retry.go:31] will retry after 1.083324701s: waiting for machine to come up
	I0109 00:09:13.104954  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:13.105499  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:13.105535  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:13.105442  453115 retry.go:31] will retry after 1.445208328s: waiting for machine to come up
	I0109 00:09:14.552723  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:14.553247  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:14.553278  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:14.553202  453115 retry.go:31] will retry after 1.207345182s: waiting for machine to come up
	I0109 00:09:14.301519  451984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.892406004s)
	I0109 00:09:14.301567  451984 crio.go:451] Took 2.892564 seconds to extract the tarball
	I0109 00:09:14.301579  451984 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:09:14.344103  451984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:14.399048  451984 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:09:14.399072  451984 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:09:14.399160  451984 ssh_runner.go:195] Run: crio config
	I0109 00:09:14.459603  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:14.459643  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:14.459693  451984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:14.459752  451984 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.132 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845373 NodeName:embed-certs-845373 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:14.460006  451984 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-845373"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:14.460098  451984 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-845373 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:14.460176  451984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:09:14.469269  451984 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:14.469363  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:14.479156  451984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0109 00:09:14.496058  451984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:09:14.513299  451984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0109 00:09:14.530721  451984 ssh_runner.go:195] Run: grep 192.168.50.132	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:14.534849  451984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:14.546999  451984 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373 for IP: 192.168.50.132
	I0109 00:09:14.547045  451984 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:14.547259  451984 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:14.547310  451984 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:14.547456  451984 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/client.key
	I0109 00:09:14.547531  451984 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key.073edd3d
	I0109 00:09:14.547584  451984 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key
	I0109 00:09:14.547733  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:14.547770  451984 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:14.547778  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:14.547803  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:14.547822  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:14.547851  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:14.547891  451984 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:14.548888  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:14.574032  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:14.599543  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:14.625213  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/embed-certs-845373/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:09:14.650001  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:14.675008  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:14.699179  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:14.722451  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:14.746559  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:14.769631  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:14.792906  451984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:14.815748  451984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:14.832389  451984 ssh_runner.go:195] Run: openssl version
	I0109 00:09:14.840602  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:14.856001  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862098  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.862187  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:14.868184  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:14.879131  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:14.890092  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894911  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.894969  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:14.900490  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:14.912056  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:14.923126  451984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.927937  451984 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.928024  451984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:14.933646  451984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:14.944658  451984 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:14.949507  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:14.956040  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:14.962180  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:14.968224  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:14.974087  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:14.980079  451984 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
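	(Editor's note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A rough Go equivalent of that single check is sketched below; the helper name and paths are illustrative, not minikube's code.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path stays valid
    // for at least d more, mirroring `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// true when NotAfter is still beyond now + d
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }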
	I0109 00:09:14.986029  451984 kubeadm.go:404] StartCluster: {Name:embed-certs-845373 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-845373 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:14.986148  451984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:14.986202  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:15.027950  451984 cri.go:89] found id: ""
	I0109 00:09:15.028035  451984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:15.039282  451984 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:15.039314  451984 kubeadm.go:636] restartCluster start
	I0109 00:09:15.039430  451984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:15.049695  451984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.050930  451984 kubeconfig.go:92] found "embed-certs-845373" server: "https://192.168.50.132:8443"
	I0109 00:09:15.053805  451984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:15.064953  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.065018  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.078921  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.565496  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:15.565626  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:15.578601  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.065133  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.065227  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.077749  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:16.565317  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:16.565425  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:16.578351  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:17.065861  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.065998  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.078781  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:15.762565  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:15.762982  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:15.763010  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:15.762909  453115 retry.go:31] will retry after 2.319709932s: waiting for machine to come up
	I0109 00:09:18.083780  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:18.084295  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:18.084330  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:18.084224  453115 retry.go:31] will retry after 2.101421106s: waiting for machine to come up
	I0109 00:09:20.188389  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:20.188770  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:20.188804  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:20.188712  453115 retry.go:31] will retry after 2.578747646s: waiting for machine to come up
	I0109 00:09:17.565567  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:17.565690  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:17.578496  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.065006  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.065120  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.078249  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:18.565568  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:18.565732  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:18.582691  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.065249  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.065353  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.082433  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:19.564998  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:19.565129  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:19.582026  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.065462  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.065563  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.079586  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:20.565150  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:20.565253  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:20.581576  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.065135  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.065246  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.080231  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:21.565856  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:21.566034  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:21.582980  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.065130  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.065245  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.078868  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:22.769370  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:22.769835  452237 main.go:141] libmachine: (no-preload-378213) DBG | unable to find current IP address of domain no-preload-378213 in network mk-no-preload-378213
	I0109 00:09:22.769877  452237 main.go:141] libmachine: (no-preload-378213) DBG | I0109 00:09:22.769775  453115 retry.go:31] will retry after 4.446013118s: waiting for machine to come up
	I0109 00:09:22.565774  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:22.565850  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:22.581869  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.065381  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.065511  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.078260  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:23.565069  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:23.565171  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:23.577588  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.065102  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.065184  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.077356  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:24.565990  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:24.566090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:24.578416  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.065960  451984 api_server.go:166] Checking apiserver status ...
	I0109 00:09:25.066090  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:25.078618  451984 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:25.078652  451984 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:25.078665  451984 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:25.078689  451984 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:25.078759  451984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:25.117213  451984 cri.go:89] found id: ""
	I0109 00:09:25.117304  451984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:25.133313  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:25.142683  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:25.142755  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152228  451984 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:25.152252  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:25.273216  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.323239  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.049977221s)
	I0109 00:09:26.323274  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.531333  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.605976  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:26.691914  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:26.692006  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
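	(Editor's note: earlier in this block the restart path ran `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until the window elapsed with "context deadline exceeded"; after the kubeadm init phases above, the same wait for the apiserver process begins again. A simplified local sketch of that kind of poll is shown below; the function name, interval, and timeout are assumptions for illustration, not the api_server.go implementation, and it omits the SSH hop.)

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer runs pgrep on an interval until a kube-apiserver
    // process appears or ctx's deadline passes.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("apiserver pid: %s", out)
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond))
    }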
	I0109 00:09:28.408538  452488 start.go:369] acquired machines lock for "default-k8s-diff-port-834116" in 4m0.587839533s
	I0109 00:09:28.408614  452488 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:28.408627  452488 fix.go:54] fixHost starting: 
	I0109 00:09:28.409094  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:28.409147  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:28.426990  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0109 00:09:28.427467  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:28.428010  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:09:28.428043  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:28.428413  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:28.428726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:28.428887  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:09:28.430477  452488 fix.go:102] recreateIfNeeded on default-k8s-diff-port-834116: state=Stopped err=<nil>
	I0109 00:09:28.430508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	W0109 00:09:28.430658  452488 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:28.432612  452488 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-834116" ...
	I0109 00:09:27.220872  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221372  452237 main.go:141] libmachine: (no-preload-378213) Found IP for machine: 192.168.61.62
	I0109 00:09:27.221401  452237 main.go:141] libmachine: (no-preload-378213) Reserving static IP address...
	I0109 00:09:27.221416  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has current primary IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.221769  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.221820  452237 main.go:141] libmachine: (no-preload-378213) DBG | skip adding static IP to network mk-no-preload-378213 - found existing host DHCP lease matching {name: "no-preload-378213", mac: "52:54:00:34:ef:49", ip: "192.168.61.62"}
	I0109 00:09:27.221842  452237 main.go:141] libmachine: (no-preload-378213) Reserved static IP address: 192.168.61.62
	I0109 00:09:27.221859  452237 main.go:141] libmachine: (no-preload-378213) Waiting for SSH to be available...
	I0109 00:09:27.221877  452237 main.go:141] libmachine: (no-preload-378213) DBG | Getting to WaitForSSH function...
	I0109 00:09:27.224260  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224609  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.224643  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.224762  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH client type: external
	I0109 00:09:27.224792  452237 main.go:141] libmachine: (no-preload-378213) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa (-rw-------)
	I0109 00:09:27.224822  452237 main.go:141] libmachine: (no-preload-378213) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:27.224832  452237 main.go:141] libmachine: (no-preload-378213) DBG | About to run SSH command:
	I0109 00:09:27.224841  452237 main.go:141] libmachine: (no-preload-378213) DBG | exit 0
	I0109 00:09:27.315335  452237 main.go:141] libmachine: (no-preload-378213) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:27.315823  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetConfigRaw
	I0109 00:09:27.316473  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.319014  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319305  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.319339  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.319673  452237 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/config.json ...
	I0109 00:09:27.319916  452237 machine.go:88] provisioning docker machine ...
	I0109 00:09:27.319939  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:27.320167  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320354  452237 buildroot.go:166] provisioning hostname "no-preload-378213"
	I0109 00:09:27.320378  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.320575  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.322760  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323156  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.323187  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.323317  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.323542  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323711  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.323869  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.324061  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.324556  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.324577  452237 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-378213 && echo "no-preload-378213" | sudo tee /etc/hostname
	I0109 00:09:27.452901  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-378213
	
	I0109 00:09:27.452957  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.456295  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456636  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.456693  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.456919  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.457140  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457343  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.457491  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.457671  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.458159  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.458188  452237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-378213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-378213/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-378213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:27.579589  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:27.579626  452237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:27.579658  452237 buildroot.go:174] setting up certificates
	I0109 00:09:27.579674  452237 provision.go:83] configureAuth start
	I0109 00:09:27.579688  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetMachineName
	I0109 00:09:27.580039  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:27.583100  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583557  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.583592  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.583759  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.586482  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.586816  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.586862  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.587019  452237 provision.go:138] copyHostCerts
	I0109 00:09:27.587091  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:27.587105  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:27.587162  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:27.587246  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:27.587256  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:27.587276  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:27.587326  452237 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:27.587333  452237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:27.587350  452237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:27.587423  452237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.no-preload-378213 san=[192.168.61.62 192.168.61.62 localhost 127.0.0.1 minikube no-preload-378213]
	I0109 00:09:27.642093  452237 provision.go:172] copyRemoteCerts
	I0109 00:09:27.642159  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:27.642186  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.645245  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645702  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.645736  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.645959  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.646180  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.646367  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.646552  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:27.740878  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:09:27.770934  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:27.794548  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:27.819155  452237 provision.go:86] duration metric: configureAuth took 239.463059ms
	I0109 00:09:27.819191  452237 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:27.819452  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:09:27.819556  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:27.822793  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823249  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:27.823282  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:27.823482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:27.823666  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823812  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:27.823943  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:27.824098  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:27.824547  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:27.824575  452237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:28.155878  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:28.155939  452237 machine.go:91] provisioned docker machine in 835.996764ms
	I0109 00:09:28.155955  452237 start.go:300] post-start starting for "no-preload-378213" (driver="kvm2")
	I0109 00:09:28.155975  452237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:28.156002  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.156370  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:28.156408  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.159411  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.159831  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.159863  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.160134  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.160347  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.160553  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.160700  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.249092  452237 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:28.253686  452237 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:28.253721  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:28.253812  452237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:28.253914  452237 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:28.254042  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:28.262550  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:28.286467  452237 start.go:303] post-start completed in 130.492214ms
	I0109 00:09:28.286497  452237 fix.go:56] fixHost completed within 20.373793038s
	I0109 00:09:28.286527  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.289569  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290022  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.290056  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.290374  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.290619  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.290815  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.291040  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.291256  452237 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:28.291770  452237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.62 22 <nil> <nil>}
	I0109 00:09:28.291788  452237 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:09:28.408354  452237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758968.353872845
	
	I0109 00:09:28.408384  452237 fix.go:206] guest clock: 1704758968.353872845
	I0109 00:09:28.408392  452237 fix.go:219] Guest: 2024-01-09 00:09:28.353872845 +0000 UTC Remote: 2024-01-09 00:09:28.286503221 +0000 UTC m=+283.122022206 (delta=67.369624ms)
	I0109 00:09:28.408411  452237 fix.go:190] guest clock delta is within tolerance: 67.369624ms
	I0109 00:09:28.408416  452237 start.go:83] releasing machines lock for "no-preload-378213", held for 20.495748993s
	I0109 00:09:28.408448  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.408745  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:28.411951  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412357  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.412395  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.412550  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413258  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413495  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:09:28.413588  452237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:28.413639  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.414067  452237 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:28.414125  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:09:28.416878  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417049  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417271  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417292  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417482  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417550  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:28.417710  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.417720  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:09:28.417771  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:28.417896  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.417935  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:09:28.418108  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:09:28.418105  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.418226  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:09:28.533738  452237 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:28.541801  452237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:28.692517  452237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:28.700384  452237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:28.700455  452237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:28.720264  452237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:28.720300  452237 start.go:475] detecting cgroup driver to use...
	I0109 00:09:28.720376  452237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:28.739758  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:28.755682  452237 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:28.755754  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:28.772178  452237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:28.792261  452237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:28.908562  452237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:29.042390  452237 docker.go:219] disabling docker service ...
	I0109 00:09:29.042528  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:29.055964  452237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:29.071788  452237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:29.191963  452237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:29.322608  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:29.336149  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:29.357616  452237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:29.357765  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.372357  452237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:29.372436  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.393266  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.405729  452237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:29.417114  452237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:29.428259  452237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:29.440397  452237 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:29.440499  452237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:29.454482  452237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:29.467600  452237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:29.590644  452237 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:29.786115  452237 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:29.786205  452237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:29.793049  452237 start.go:543] Will wait 60s for crictl version
	I0109 00:09:29.793129  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:29.798630  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:29.847758  452237 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:29.847850  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.905071  452237 ssh_runner.go:195] Run: crio --version
	I0109 00:09:29.963992  452237 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:09:29.965790  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetIP
	I0109 00:09:29.969222  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969638  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:09:29.969687  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:09:29.969930  452237 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:29.974709  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:29.989617  452237 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:09:29.989667  452237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:30.034776  452237 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:09:30.034804  452237 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.034911  452237 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0109 00:09:30.034925  452237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.034948  452237 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.035060  452237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.034894  452237 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.034904  452237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.035172  452237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036679  452237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.036727  452237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.036737  452237 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.036788  452237 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0109 00:09:30.036814  452237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.036730  452237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.036846  452237 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.036678  452237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.208127  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:27.192095  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:27.692608  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.192176  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:28.692194  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.192059  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:29.219995  451984 api_server.go:72] duration metric: took 2.528085009s to wait for apiserver process to appear ...
	I0109 00:09:29.220032  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:09:29.220058  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:28.434238  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Start
	I0109 00:09:28.434453  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring networks are active...
	I0109 00:09:28.435324  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network default is active
	I0109 00:09:28.435804  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Ensuring network mk-default-k8s-diff-port-834116 is active
	I0109 00:09:28.436322  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Getting domain xml...
	I0109 00:09:28.437072  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Creating domain...
	I0109 00:09:29.958911  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting to get IP...
	I0109 00:09:29.959938  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960820  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:29.960896  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:29.960822  453241 retry.go:31] will retry after 210.498897ms: waiting for machine to come up
	I0109 00:09:30.173307  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173717  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.173752  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.173670  453241 retry.go:31] will retry after 342.664675ms: waiting for machine to come up
	I0109 00:09:30.518442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.519113  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.519069  453241 retry.go:31] will retry after 411.240969ms: waiting for machine to come up
	I0109 00:09:30.931762  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932152  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:30.932182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:30.932104  453241 retry.go:31] will retry after 402.965268ms: waiting for machine to come up
	I0109 00:09:31.336957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337426  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.337459  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.337393  453241 retry.go:31] will retry after 626.321347ms: waiting for machine to come up
	I0109 00:09:31.965071  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965632  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:31.965665  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:31.965592  453241 retry.go:31] will retry after 787.166947ms: waiting for machine to come up
	I0109 00:09:30.217603  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0109 00:09:30.234877  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.243097  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.258262  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.273678  452237 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0109 00:09:30.273761  452237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.273826  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.278909  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.285277  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.289552  452237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.430758  452237 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0109 00:09:30.430813  452237 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.430866  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.430995  452237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0109 00:09:30.431023  452237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.431061  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456561  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0109 00:09:30.456591  452237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0109 00:09:30.456636  452237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.456690  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456722  452237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0109 00:09:30.456757  452237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.456791  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.456911  452237 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0109 00:09:30.456945  452237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.456976  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.482028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0109 00:09:30.482298  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0109 00:09:30.482547  452237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0109 00:09:30.482694  452237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.482754  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:09:30.518754  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0109 00:09:30.518899  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:09:30.518966  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0109 00:09:30.519317  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0109 00:09:30.519422  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.629846  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630082  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630145  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:30.630189  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:30.630022  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0109 00:09:30.630280  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:30.630028  452237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0109 00:09:30.657819  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657907  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0109 00:09:30.657966  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:30.657824  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0109 00:09:30.658025  452237 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658053  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:30.658084  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0109 00:09:30.658091  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0109 00:09:30.658142  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0109 00:09:30.658173  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0109 00:09:30.714523  452237 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:30.714654  452237 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:32.867027  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.208889866s)
	I0109 00:09:32.867091  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0109 00:09:32.867107  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.209103985s)
	I0109 00:09:32.867122  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:32.867141  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867187  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.209109716s)
	I0109 00:09:32.867221  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0109 00:09:32.867220  452237 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.15254199s)
	I0109 00:09:32.867251  452237 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0109 00:09:32.867190  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0109 00:09:35.150432  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.283143174s)
	I0109 00:09:35.150478  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0109 00:09:35.150509  452237 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:35.150560  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0109 00:09:34.179483  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.179518  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.179533  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.210742  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:09:34.210780  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:09:34.220940  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.259813  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.259869  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:34.720337  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:34.733062  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:34.733105  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.220599  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.228775  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:09:35.228814  451984 api_server.go:103] status: https://192.168.50.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:09:35.720241  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:09:35.725882  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:09:35.736706  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:09:35.736745  451984 api_server.go:131] duration metric: took 6.516702561s to wait for apiserver health ...
	I0109 00:09:35.736790  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:09:35.736811  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:35.739014  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:09:35.740624  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:09:35.776055  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:09:35.814280  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:09:35.832281  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:09:35.832330  451984 system_pods.go:61] "coredns-5dd5756b68-vkd62" [c676d069-cca7-428c-8eec-026ecea14be2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:09:35.832342  451984 system_pods.go:61] "etcd-embed-certs-845373" [92d4616d-126c-4ee9-9475-9d0c790090c3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:09:35.832354  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [9663f585-eca1-4f8f-8a93-aea9b4e98c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:09:35.832368  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [41b4ce59-d838-4798-b593-93c7c8573733] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:09:35.832383  451984 system_pods.go:61] "kube-proxy-tbzpb" [132469d5-d267-4869-ad09-c9fba8d0f9d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:09:35.832398  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [336147ec-8318-496b-986d-55845e7dd9a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:09:35.832408  451984 system_pods.go:61] "metrics-server-57f55c9bc5-2p4js" [c37e24f3-c50b-4169-9d0b-48e21072a114] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:09:35.832421  451984 system_pods.go:61] "storage-provisioner" [e558d9f2-6d92-41d6-82bf-194f53ead52c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:09:35.832436  451984 system_pods.go:74] duration metric: took 18.123808ms to wait for pod list to return data ...
	I0109 00:09:35.832451  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:09:35.836031  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:09:35.836180  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:09:35.836225  451984 node_conditions.go:105] duration metric: took 3.766883ms to run NodePressure ...
	I0109 00:09:35.836250  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:36.192967  451984 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198294  451984 kubeadm.go:787] kubelet initialised
	I0109 00:09:36.198327  451984 kubeadm.go:788] duration metric: took 5.32566ms waiting for restarted kubelet to initialise ...
	I0109 00:09:36.198373  451984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:09:36.205198  451984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:36.230481  451984 pod_ready.go:97] node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230560  451984 pod_ready.go:81] duration metric: took 25.328027ms waiting for pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace to be "Ready" ...
	E0109 00:09:36.230576  451984 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-845373" hosting pod "coredns-5dd5756b68-vkd62" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-845373" has status "Ready":"False"
	I0109 00:09:36.230600  451984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:32.754128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779281  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:32.779328  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:32.754425  453241 retry.go:31] will retry after 781.872506ms: waiting for machine to come up
	I0109 00:09:33.538136  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:33.538643  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:33.538562  453241 retry.go:31] will retry after 1.315575893s: waiting for machine to come up
	I0109 00:09:34.856083  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857209  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:34.857287  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:34.857007  453241 retry.go:31] will retry after 1.252692701s: waiting for machine to come up
	I0109 00:09:36.111647  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112092  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:36.112127  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:36.112042  453241 retry.go:31] will retry after 1.549931798s: waiting for machine to come up
	I0109 00:09:37.664325  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664771  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:37.664841  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:37.664729  453241 retry.go:31] will retry after 2.220936863s: waiting for machine to come up
	I0109 00:09:39.585741  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.435146297s)
	I0109 00:09:39.585853  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0109 00:09:39.585890  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:39.585954  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0109 00:09:38.239319  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:40.240459  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:39.886897  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887409  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:39.887446  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:39.887322  453241 retry.go:31] will retry after 3.125817684s: waiting for machine to come up
	I0109 00:09:42.688186  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.102196226s)
	I0109 00:09:42.688238  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0109 00:09:42.688270  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:42.688333  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0109 00:09:44.144243  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.455874893s)
	I0109 00:09:44.144277  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0109 00:09:44.144322  452237 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:44.144396  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0109 00:09:45.193429  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.048998334s)
	I0109 00:09:45.193464  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0109 00:09:45.193501  452237 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:45.193553  452237 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0109 00:09:42.241597  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:44.740359  451984 pod_ready.go:102] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:46.239061  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.239098  451984 pod_ready.go:81] duration metric: took 10.008483597s waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.239112  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244571  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.244598  451984 pod_ready.go:81] duration metric: took 5.476365ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.244610  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249839  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.249866  451984 pod_ready.go:81] duration metric: took 5.248385ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.249891  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254718  451984 pod_ready.go:92] pod "kube-proxy-tbzpb" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:46.254742  451984 pod_ready.go:81] duration metric: took 4.843779ms waiting for pod "kube-proxy-tbzpb" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:46.254752  451984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:43.016904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:43.017479  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:43.017386  453241 retry.go:31] will retry after 3.976875386s: waiting for machine to come up
	I0109 00:09:46.996452  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996902  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | unable to find current IP address of domain default-k8s-diff-port-834116 in network mk-default-k8s-diff-port-834116
	I0109 00:09:46.996937  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | I0109 00:09:46.996855  453241 retry.go:31] will retry after 5.149738116s: waiting for machine to come up
	I0109 00:09:47.750708  452237 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.557124662s)
	I0109 00:09:47.750737  452237 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0109 00:09:47.750767  452237 cache_images.go:123] Successfully loaded all cached images
	I0109 00:09:47.750773  452237 cache_images.go:92] LoadImages completed in 17.715956149s
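
For context, the cache_images step above amounts to serial "sudo podman load -i <tarball>" invocations, one per cached image, before kubeadm is configured. Below is a minimal local sketch of that loop in Go; the tarball paths are taken verbatim from the log, and os/exec stands in for minikube's ssh_runner, so this is illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Image tarballs staged under /var/lib/minikube/images, mirroring the log above.
	tarballs := []string{
		"/var/lib/minikube/images/etcd_3.5.10-0",
		"/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2",
		"/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2",
		"/var/lib/minikube/images/storage-provisioner_v5",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
	}
	for _, t := range tarballs {
		// Loads are serialized, matching the successive "Loading image: ..." lines.
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s failed: %v\n%s", t, err, out)
		}
		fmt.Printf("loaded %s\n", t)
	}
}
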
	I0109 00:09:47.750871  452237 ssh_runner.go:195] Run: crio config
	I0109 00:09:47.811486  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:09:47.811510  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:09:47.811535  452237 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:09:47.811560  452237 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.62 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-378213 NodeName:no-preload-378213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:09:47.811757  452237 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-378213"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:09:47.811881  452237 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-378213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:09:47.811954  452237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:09:47.821353  452237 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:09:47.821426  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:09:47.830117  452237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0109 00:09:47.847966  452237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:09:47.865130  452237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0109 00:09:47.881920  452237 ssh_runner.go:195] Run: grep 192.168.61.62	control-plane.minikube.internal$ /etc/hosts
	I0109 00:09:47.885907  452237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:47.899472  452237 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213 for IP: 192.168.61.62
	I0109 00:09:47.899519  452237 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:09:47.899687  452237 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:09:47.899729  452237 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:09:47.899792  452237 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/client.key
	I0109 00:09:47.899854  452237 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key.fe752756
	I0109 00:09:47.899891  452237 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key
	I0109 00:09:47.899991  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:09:47.900022  452237 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:09:47.900033  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:09:47.900056  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:09:47.900084  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:09:47.900111  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:09:47.900176  452237 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:47.900831  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:09:47.926702  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:09:47.952472  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:09:47.977143  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/no-preload-378213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:09:48.001909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:09:48.028506  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:09:48.054909  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:09:48.079320  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:09:48.106719  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:09:48.133440  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:09:48.157353  452237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:09:48.180860  452237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:09:48.198490  452237 ssh_runner.go:195] Run: openssl version
	I0109 00:09:48.204240  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:09:48.214015  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218654  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.218717  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:09:48.224372  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:09:48.233922  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:09:48.243425  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248305  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.248381  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:09:48.254018  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:09:48.263791  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:09:48.273568  452237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278373  452237 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.278438  452237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:09:48.284003  452237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:09:48.296358  452237 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:09:48.301336  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:09:48.307645  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:09:48.313470  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:09:48.319349  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:09:48.325344  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:09:48.331352  452237 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
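
The "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each control-plane certificate remains valid for at least the next 86400 seconds (24 hours). A rough Go equivalent for one certificate is sketched below; the path is copied from the log and the 24-hour window mirrors the -checkend argument, so treat it as an illustration rather than minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
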
	I0109 00:09:48.337159  452237 kubeadm.go:404] StartCluster: {Name:no-preload-378213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-378213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:09:48.337255  452237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:09:48.337302  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:48.374150  452237 cri.go:89] found id: ""
	I0109 00:09:48.374229  452237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:09:48.383627  452237 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:09:48.383649  452237 kubeadm.go:636] restartCluster start
	I0109 00:09:48.383699  452237 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:09:48.392428  452237 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.393515  452237 kubeconfig.go:92] found "no-preload-378213" server: "https://192.168.61.62:8443"
	I0109 00:09:48.395997  452237 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:09:48.404639  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.404708  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.416205  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.904794  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:48.904896  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:48.916391  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.404903  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.405006  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.416469  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:49.905053  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:49.905224  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:49.916621  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:48.262991  451984 pod_ready.go:102] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:50.262235  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:09:50.262262  451984 pod_ready.go:81] duration metric: took 4.007503301s waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:09:50.262275  451984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
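
The interleaved pod_ready lines poll each system pod until its Ready condition reports True, giving up after the stated 4m0s budget (or short-circuiting when the hosting node itself is not Ready, as happened with coredns earlier). The client-go sketch below shows the general shape of such a check; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, pod name, and 2-second poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes the default kubeconfig in $HOME/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-845373", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}
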
	I0109 00:09:52.150891  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151383  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Found IP for machine: 192.168.39.73
	I0109 00:09:52.151416  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserving static IP address...
	I0109 00:09:52.151442  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has current primary IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.151904  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.151943  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | skip adding static IP to network mk-default-k8s-diff-port-834116 - found existing host DHCP lease matching {name: "default-k8s-diff-port-834116", mac: "52:54:00:13:e8:ec", ip: "192.168.39.73"}
	I0109 00:09:52.151966  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Reserved static IP address: 192.168.39.73
	I0109 00:09:52.152005  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Waiting for SSH to be available...
	I0109 00:09:52.152039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Getting to WaitForSSH function...
	I0109 00:09:52.154139  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154471  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.154514  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.154642  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH client type: external
	I0109 00:09:52.154672  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa (-rw-------)
	I0109 00:09:52.154701  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:09:52.154719  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | About to run SSH command:
	I0109 00:09:52.154736  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | exit 0
	I0109 00:09:52.247320  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | SSH cmd err, output: <nil>: 
	I0109 00:09:52.247704  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetConfigRaw
	I0109 00:09:52.248366  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.251047  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251482  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.251511  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.251734  452488 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/config.json ...
	I0109 00:09:52.251981  452488 machine.go:88] provisioning docker machine ...
	I0109 00:09:52.252003  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:52.252219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252396  452488 buildroot.go:166] provisioning hostname "default-k8s-diff-port-834116"
	I0109 00:09:52.252418  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.252612  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.254861  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255244  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.255276  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.255485  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.255657  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.255956  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.256111  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.256468  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.256485  452488 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-834116 && echo "default-k8s-diff-port-834116" | sudo tee /etc/hostname
	I0109 00:09:52.392092  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-834116
	
	I0109 00:09:52.392128  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.394807  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395260  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.395312  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.395539  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.395797  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396091  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.396289  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.396464  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:52.396839  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:52.396863  452488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-834116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-834116/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-834116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:09:52.527950  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:09:52.527981  452488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:09:52.528006  452488 buildroot.go:174] setting up certificates
	I0109 00:09:52.528021  452488 provision.go:83] configureAuth start
	I0109 00:09:52.528033  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetMachineName
	I0109 00:09:52.528365  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:52.531179  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531597  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.531624  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.531763  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.534073  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534480  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.534521  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.534650  452488 provision.go:138] copyHostCerts
	I0109 00:09:52.534726  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:09:52.534737  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:09:52.534796  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:09:52.534902  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:09:52.534912  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:09:52.534933  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:09:52.535020  452488 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:09:52.535027  452488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:09:52.535042  452488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:09:52.535093  452488 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-834116 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube default-k8s-diff-port-834116]
	I0109 00:09:53.636158  451943 start.go:369] acquired machines lock for "old-k8s-version-003293" in 1m0.185697203s
	I0109 00:09:53.636214  451943 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:09:53.636222  451943 fix.go:54] fixHost starting: 
	I0109 00:09:53.636646  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:09:53.636682  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:09:53.654194  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0109 00:09:53.654606  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:09:53.655203  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:09:53.655227  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:09:53.655659  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:09:53.655927  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:09:53.656139  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:09:53.657909  451943 fix.go:102] recreateIfNeeded on old-k8s-version-003293: state=Stopped err=<nil>
	I0109 00:09:53.657934  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	W0109 00:09:53.658135  451943 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:09:53.660261  451943 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-003293" ...
	I0109 00:09:52.872029  452488 provision.go:172] copyRemoteCerts
	I0109 00:09:52.872106  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:09:52.872134  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:52.874824  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875218  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:52.875256  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:52.875469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:52.875726  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:52.875959  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:52.876122  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:52.970940  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:09:52.995353  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0109 00:09:53.019846  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:09:53.048132  452488 provision.go:86] duration metric: configureAuth took 520.096734ms
	I0109 00:09:53.048166  452488 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:09:53.048357  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:09:53.048458  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.051336  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051745  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.051781  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.051963  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.052200  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.052578  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.052753  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.053273  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.053296  452488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:09:53.371482  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:09:53.371519  452488 machine.go:91] provisioned docker machine in 1.119521349s
	I0109 00:09:53.371534  452488 start.go:300] post-start starting for "default-k8s-diff-port-834116" (driver="kvm2")
	I0109 00:09:53.371572  452488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:09:53.371601  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.371940  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:09:53.371968  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.374606  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.374999  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.375039  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.375242  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.375487  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.375668  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.375823  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.469684  452488 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:09:53.474184  452488 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:09:53.474226  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:09:53.474291  452488 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:09:53.474375  452488 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:09:53.474510  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:09:53.484106  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:09:53.508477  452488 start.go:303] post-start completed in 136.921252ms
	I0109 00:09:53.508516  452488 fix.go:56] fixHost completed within 25.099889324s
	I0109 00:09:53.508540  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.511508  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.511954  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.511993  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.512174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.512412  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512605  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.512739  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.512966  452488 main.go:141] libmachine: Using SSH client type: native
	I0109 00:09:53.513304  452488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0109 00:09:53.513319  452488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:09:53.635969  452488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704758993.581588382
	
	I0109 00:09:53.635992  452488 fix.go:206] guest clock: 1704758993.581588382
	I0109 00:09:53.636001  452488 fix.go:219] Guest: 2024-01-09 00:09:53.581588382 +0000 UTC Remote: 2024-01-09 00:09:53.508520878 +0000 UTC m=+265.847432935 (delta=73.067504ms)
	I0109 00:09:53.636037  452488 fix.go:190] guest clock delta is within tolerance: 73.067504ms
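
The guest-clock check above captures "date +%s.%N" on the VM and compares it against the host clock; the 73ms delta is then judged against a tolerance before the machines lock is released. A small Go sketch of that comparison follows; the timestamp string is the one from the log, while the 2-second tolerance is an assumed value for illustration, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value captured by the SSH command in the log above.
	guest, err := parseGuestClock("1704758993.581588382")
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, not minikube's actual value
	fmt.Printf("guest/host clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
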
	I0109 00:09:53.636042  452488 start.go:83] releasing machines lock for "default-k8s-diff-port-834116", held for 25.227459425s
	I0109 00:09:53.636078  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.636408  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:53.639469  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.639957  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.639990  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.640149  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.640967  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:09:53.641079  452488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:09:53.641126  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.641236  452488 ssh_runner.go:195] Run: cat /version.json
	I0109 00:09:53.641263  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:09:53.643872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644145  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644230  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644258  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644427  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644519  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:53.644552  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:53.644618  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644698  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:09:53.644784  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.644850  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:09:53.644945  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.645012  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:09:53.645188  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:09:53.758973  452488 ssh_runner.go:195] Run: systemctl --version
	I0109 00:09:53.765494  452488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:09:53.913457  452488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:09:53.921317  452488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:09:53.921409  452488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:09:53.937393  452488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:09:53.937422  452488 start.go:475] detecting cgroup driver to use...
	I0109 00:09:53.937501  452488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:09:53.954986  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:09:53.967577  452488 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:09:53.967661  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:09:53.981370  452488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:09:53.994954  452488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:09:54.113662  452488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:09:54.257917  452488 docker.go:219] disabling docker service ...
	I0109 00:09:54.258009  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:09:54.275330  452488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:09:54.287545  452488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:09:54.413696  452488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:09:54.534759  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:09:54.548789  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:09:54.567131  452488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:09:54.567209  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.578605  452488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:09:54.578690  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.588764  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.598290  452488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:09:54.608187  452488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:09:54.619339  452488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:09:54.627744  452488 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:09:54.627810  452488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:09:54.640572  452488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:09:54.649169  452488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:09:54.774028  452488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:09:54.981035  452488 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:09:54.981123  452488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:09:54.986812  452488 start.go:543] Will wait 60s for crictl version
	I0109 00:09:54.986874  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:09:54.991067  452488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:09:55.026881  452488 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:09:55.026988  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.084315  452488 ssh_runner.go:195] Run: crio --version
	I0109 00:09:55.135003  452488 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
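
A quick way to confirm the CRI-O settings applied by the sed edits above (pause image, cgroup manager, conmon cgroup) is sketched below; this is an illustrative check of the file path and keys named in the log, not a capture from the node.

# Illustrative check of the values the sed edits above set (run on the minikube node):
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
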
	I0109 00:09:50.405359  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.405454  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.417541  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:50.904703  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:50.904809  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:50.916106  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.404732  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.404823  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.418697  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:51.905352  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:51.905439  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:51.917655  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.404773  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.404858  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.417345  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:52.905434  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:52.905529  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:52.916604  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.404704  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.404820  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.416990  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:53.905624  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:53.905727  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:53.918455  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.404944  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.405034  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.419015  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:54.905601  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:54.905738  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:54.921252  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
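
The repeated "Checking apiserver status" warnings above come from polling for the kube-apiserver process over SSH; a manual equivalent on the node is the same pgrep call shown in the log, where exit status 1 simply means the process has not appeared yet (the echo wrapper is illustrative only).

# Same probe the log keeps retrying; a non-zero exit just means kube-apiserver is not running yet.
sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process not found yet"
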
	I0109 00:09:53.661730  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Start
	I0109 00:09:53.661977  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring networks are active...
	I0109 00:09:53.662718  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network default is active
	I0109 00:09:53.663173  451943 main.go:141] libmachine: (old-k8s-version-003293) Ensuring network mk-old-k8s-version-003293 is active
	I0109 00:09:53.663701  451943 main.go:141] libmachine: (old-k8s-version-003293) Getting domain xml...
	I0109 00:09:53.664456  451943 main.go:141] libmachine: (old-k8s-version-003293) Creating domain...
	I0109 00:09:55.030325  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting to get IP...
	I0109 00:09:55.031241  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.031720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.031800  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.031693  453422 retry.go:31] will retry after 209.915867ms: waiting for machine to come up
	I0109 00:09:55.243218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.243740  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.243792  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.243678  453422 retry.go:31] will retry after 309.964884ms: waiting for machine to come up
	I0109 00:09:55.555468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.556044  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.556075  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.555982  453422 retry.go:31] will retry after 306.870224ms: waiting for machine to come up
	I0109 00:09:55.864558  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:55.865161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:55.865199  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:55.865113  453422 retry.go:31] will retry after 475.599739ms: waiting for machine to come up
	I0109 00:09:52.270751  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:54.271341  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:56.775574  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:09:55.136380  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetIP
	I0109 00:09:55.139749  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140142  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:09:55.140174  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:09:55.140387  452488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0109 00:09:55.145715  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:09:55.159881  452488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:09:55.159972  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:09:55.209715  452488 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0109 00:09:55.209814  452488 ssh_runner.go:195] Run: which lz4
	I0109 00:09:55.214766  452488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:09:55.219645  452488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:09:55.219683  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0109 00:09:57.101116  452488 crio.go:444] Took 1.886420 seconds to copy over tarball
	I0109 00:09:57.101207  452488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:09:55.405633  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.405734  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.420242  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:55.905578  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:55.905685  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:55.923018  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.405516  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.405602  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.420028  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:56.905320  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:56.905409  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:56.940464  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.404810  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.404925  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.420965  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:57.905566  452237 api_server.go:166] Checking apiserver status ...
	I0109 00:09:57.905684  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:09:57.920601  452237 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:09:58.404728  452237 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:09:58.404779  452237 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:09:58.404821  452237 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:09:58.404906  452237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:09:58.450415  452237 cri.go:89] found id: ""
	I0109 00:09:58.450510  452237 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:09:58.469938  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:09:58.481877  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:09:58.481963  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494336  452237 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:09:58.494367  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:58.644325  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.472346  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.715956  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.857573  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:09:59.962996  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:09:59.963097  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:09:56.342815  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.343422  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.343456  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.343365  453422 retry.go:31] will retry after 512.8445ms: waiting for machine to come up
	I0109 00:09:56.858161  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:56.858689  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:56.858720  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:56.858631  453422 retry.go:31] will retry after 649.65221ms: waiting for machine to come up
	I0109 00:09:57.509509  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:57.510080  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:57.510121  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:57.510023  453422 retry.go:31] will retry after 1.153518379s: waiting for machine to come up
	I0109 00:09:58.665328  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:09:58.665946  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:09:58.665986  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:09:58.665886  453422 retry.go:31] will retry after 1.392576392s: waiting for machine to come up
	I0109 00:10:00.060701  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:00.061368  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:00.061416  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:00.061263  453422 retry.go:31] will retry after 1.185250663s: waiting for machine to come up
	I0109 00:09:59.270305  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:01.271958  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:00.887146  452488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.785897124s)
	I0109 00:10:00.887183  452488 crio.go:451] Took 3.786033 seconds to extract the tarball
	I0109 00:10:00.887196  452488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:00.940322  452488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:01.087742  452488 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:10:01.087778  452488 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:10:01.087861  452488 ssh_runner.go:195] Run: crio config
	I0109 00:10:01.154384  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:01.154411  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:01.154432  452488 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:01.154460  452488 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-834116 NodeName:default-k8s-diff-port-834116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:10:01.154664  452488 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-834116"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:01.154768  452488 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-834116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0109 00:10:01.154837  452488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:10:01.165075  452488 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:01.165167  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:01.175380  452488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I0109 00:10:01.198018  452488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:01.216515  452488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I0109 00:10:01.238477  452488 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:01.242706  452488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:01.256799  452488 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116 for IP: 192.168.39.73
	I0109 00:10:01.256833  452488 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:01.257009  452488 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:01.257084  452488 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:01.257180  452488 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/client.key
	I0109 00:10:01.257272  452488 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key.8b49dc8b
	I0109 00:10:01.257330  452488 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key
	I0109 00:10:01.257473  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:01.257512  452488 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:01.257529  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:01.257582  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:01.257632  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:01.257674  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:01.257737  452488 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:01.258699  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:01.288498  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:10:01.315010  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:01.342657  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/default-k8s-diff-port-834116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:10:01.368423  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:01.394295  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:01.423461  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:01.452044  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:01.478834  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:01.505029  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:01.531765  452488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:01.557126  452488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:01.575037  452488 ssh_runner.go:195] Run: openssl version
	I0109 00:10:01.580971  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:01.592882  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598205  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.598285  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:01.604293  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:01.615508  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:01.625979  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631195  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.631268  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:01.637322  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:01.649611  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:01.661754  452488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667033  452488 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.667114  452488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:01.673312  452488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:01.687649  452488 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:01.694523  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:01.701260  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:01.709371  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:01.717249  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:01.724104  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:01.730706  452488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0109 00:10:01.738716  452488 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-834116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-834116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:01.738846  452488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:01.738935  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:01.789522  452488 cri.go:89] found id: ""
	I0109 00:10:01.789639  452488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:01.802440  452488 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:01.802470  452488 kubeadm.go:636] restartCluster start
	I0109 00:10:01.802530  452488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:01.814839  452488 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:01.816303  452488 kubeconfig.go:92] found "default-k8s-diff-port-834116" server: "https://192.168.39.73:8444"
	I0109 00:10:01.818978  452488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:01.829115  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:01.829200  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:01.841947  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:02.329489  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.329629  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.346716  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:00.463974  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:00.963295  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.463906  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:01.963508  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.463259  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:02.964275  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.464037  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.963542  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:03.998344  452237 api_server.go:72] duration metric: took 4.035357514s to wait for apiserver process to appear ...
	I0109 00:10:03.998383  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:03.998415  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:03.999025  452237 api_server.go:269] stopped: https://192.168.61.62:8443/healthz: Get "https://192.168.61.62:8443/healthz": dial tcp 192.168.61.62:8443: connect: connection refused
	I0109 00:10:04.498619  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:01.248726  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:01.249297  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:01.249334  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:01.249190  453422 retry.go:31] will retry after 2.101995832s: waiting for machine to come up
	I0109 00:10:03.353250  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:03.353837  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:03.353870  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:03.353803  453422 retry.go:31] will retry after 2.338357499s: waiting for machine to come up
	I0109 00:10:05.694257  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:05.694773  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:05.694805  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:05.694753  453422 retry.go:31] will retry after 2.962877462s: waiting for machine to come up
	I0109 00:10:03.772407  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:05.776569  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
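
The pod_ready polling above keeps reporting metrics-server-57f55c9bc5-2p4js as not Ready; a rough manual equivalent with kubectl is sketched below. The pod name is taken from the log, while <profile> is a placeholder for the cluster context under test.

# Illustrative readiness check; <profile> is a placeholder for the kubectl context of the profile under test.
kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-2p4js \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
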
	I0109 00:10:02.829349  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:02.829477  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:02.845294  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.329917  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.330034  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.345877  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:03.829787  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:03.829908  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:03.845499  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.329869  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.329968  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.345228  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:04.829841  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:04.829964  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:04.841831  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.329392  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.329534  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.344928  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:05.829388  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:05.829490  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:05.845517  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.329745  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.329846  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.344692  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:06.829201  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:06.829339  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:06.844107  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.329562  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.329679  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.341888  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:07.617974  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.618015  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.618037  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:07.676283  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:07.676318  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:07.999237  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.036271  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.036307  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.498881  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:08.504457  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:08.504490  452237 api_server.go:103] status: https://192.168.61.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:08.998535  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:10:09.009194  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:10:09.017267  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:10:09.017300  452237 api_server.go:131] duration metric: took 5.018909056s to wait for apiserver health ...
	I0109 00:10:09.017311  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:10:09.017319  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:09.019322  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:09.020666  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:09.030282  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:09.049477  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:09.063218  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:09.063264  452237 system_pods.go:61] "coredns-76f75df574-kw4v7" [6a2a3896-7b4c-4912-9e6a-0033564d211b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:09.063277  452237 system_pods.go:61] "etcd-no-preload-378213" [b650412b-fa3a-4490-9b43-caf6ac1cb8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:09.063294  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [b372f056-7243-416e-905f-ba80a332005a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:09.063307  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8b32fab5-ef2b-4145-8cf8-8ec616a73798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:09.063317  452237 system_pods.go:61] "kube-proxy-kxjqj" [40d27586-c2e4-407e-ac43-c0dbd851427e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:09.063325  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [2a609b1f-ce89-4e95-b56c-c84702352967] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:09.063343  452237 system_pods.go:61] "metrics-server-57f55c9bc5-th24j" [9f47b0d1-1399-4349-8f99-d85598461c68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:09.063383  452237 system_pods.go:61] "storage-provisioner" [f12f48e3-4e11-47e4-b785-ca9b47cbc0a4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:09.063396  452237 system_pods.go:74] duration metric: took 13.893709ms to wait for pod list to return data ...
	I0109 00:10:09.063407  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:09.067414  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:09.067457  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:09.067474  452237 node_conditions.go:105] duration metric: took 4.056143ms to run NodePressure ...
	I0109 00:10:09.067507  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:09.383666  452237 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389727  452237 kubeadm.go:787] kubelet initialised
	I0109 00:10:09.389749  452237 kubeadm.go:788] duration metric: took 6.05357ms waiting for restarted kubelet to initialise ...
	I0109 00:10:09.389758  452237 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:09.397162  452237 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:08.658880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:08.659431  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | unable to find current IP address of domain old-k8s-version-003293 in network mk-old-k8s-version-003293
	I0109 00:10:08.659468  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | I0109 00:10:08.659353  453422 retry.go:31] will retry after 4.088487909s: waiting for machine to come up
	I0109 00:10:08.271546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:10.273183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:07.830081  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:07.830237  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:07.846118  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.329537  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.329642  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.345267  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:08.829229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:08.829351  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:08.845147  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.329244  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.329371  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.343552  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:09.829910  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:09.829999  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:09.841589  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.330229  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.330316  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.346027  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:10.830077  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:10.830193  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:10.842301  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.329908  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.330029  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.341398  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.829904  452488 api_server.go:166] Checking apiserver status ...
	I0109 00:10:11.830007  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:11.841281  452488 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:11.841317  452488 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:11.841340  452488 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:11.841350  452488 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:11.841406  452488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:11.880872  452488 cri.go:89] found id: ""
	I0109 00:10:11.880993  452488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:11.896522  452488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:11.905372  452488 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:11.905452  452488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915053  452488 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:11.915083  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:12.053489  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:11.406042  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:13.406387  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.752603  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753243  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has current primary IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.753276  451943 main.go:141] libmachine: (old-k8s-version-003293) Found IP for machine: 192.168.72.81
	I0109 00:10:12.753290  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserving static IP address...
	I0109 00:10:12.753738  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.753770  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | skip adding static IP to network mk-old-k8s-version-003293 - found existing host DHCP lease matching {name: "old-k8s-version-003293", mac: "52:54:00:38:0e:b5", ip: "192.168.72.81"}
	I0109 00:10:12.753790  451943 main.go:141] libmachine: (old-k8s-version-003293) Reserved static IP address: 192.168.72.81
	I0109 00:10:12.753812  451943 main.go:141] libmachine: (old-k8s-version-003293) Waiting for SSH to be available...
	I0109 00:10:12.753829  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Getting to WaitForSSH function...
	I0109 00:10:12.756348  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756765  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.756798  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.756931  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH client type: external
	I0109 00:10:12.756959  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa (-rw-------)
	I0109 00:10:12.756995  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:10:12.757008  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | About to run SSH command:
	I0109 00:10:12.757025  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | exit 0
	I0109 00:10:12.908563  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | SSH cmd err, output: <nil>: 
	I0109 00:10:12.909330  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetConfigRaw
	I0109 00:10:12.910245  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:12.913338  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.913744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.913778  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.914153  451943 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/config.json ...
	I0109 00:10:12.914422  451943 machine.go:88] provisioning docker machine ...
	I0109 00:10:12.914451  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:12.914678  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.914869  451943 buildroot.go:166] provisioning hostname "old-k8s-version-003293"
	I0109 00:10:12.914895  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:12.915042  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:12.917551  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.917918  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:12.917949  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:12.918083  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:12.918284  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918477  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:12.918637  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:12.918824  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:12.919390  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:12.919409  451943 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-003293 && echo "old-k8s-version-003293" | sudo tee /etc/hostname
	I0109 00:10:13.077570  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-003293
	
	I0109 00:10:13.077613  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.081190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081575  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.081599  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.081874  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.082128  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082377  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.082568  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.082783  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.083268  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.083293  451943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-003293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-003293/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-003293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:10:13.235134  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:10:13.235167  451943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:10:13.235216  451943 buildroot.go:174] setting up certificates
	I0109 00:10:13.235236  451943 provision.go:83] configureAuth start
	I0109 00:10:13.235254  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetMachineName
	I0109 00:10:13.235632  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:13.239282  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.239867  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.239902  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.240253  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.243109  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243516  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.243546  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.243730  451943 provision.go:138] copyHostCerts
	I0109 00:10:13.243811  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:10:13.243826  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:10:13.243917  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:10:13.244095  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:10:13.244109  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:10:13.244139  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:10:13.244233  451943 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:10:13.244244  451943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:10:13.244271  451943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:10:13.244357  451943 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-003293 san=[192.168.72.81 192.168.72.81 localhost 127.0.0.1 minikube old-k8s-version-003293]
	I0109 00:10:13.358229  451943 provision.go:172] copyRemoteCerts
	I0109 00:10:13.358298  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:10:13.358329  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.361495  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.361925  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.361961  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.362229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.362512  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.362707  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.362901  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:13.464633  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:10:13.491908  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:10:13.520424  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:10:13.551287  451943 provision.go:86] duration metric: configureAuth took 316.030603ms
	I0109 00:10:13.551322  451943 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:10:13.551588  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:10:13.551689  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.554570  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.554888  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.554941  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.555088  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.555402  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555595  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.555803  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.555991  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:13.556435  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:13.556461  451943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:10:13.929994  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:10:13.930040  451943 machine.go:91] provisioned docker machine in 1.015597473s
	I0109 00:10:13.930056  451943 start.go:300] post-start starting for "old-k8s-version-003293" (driver="kvm2")
	I0109 00:10:13.930076  451943 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:10:13.930107  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:13.930498  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:10:13.930537  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:13.933680  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934172  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:13.934218  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:13.934589  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:13.934794  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:13.935029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:13.935189  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.038045  451943 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:10:14.044182  451943 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:10:14.044220  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:10:14.044315  451943 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:10:14.044455  451943 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:10:14.044602  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:10:14.056820  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:14.083704  451943 start.go:303] post-start completed in 153.628012ms
	I0109 00:10:14.083736  451943 fix.go:56] fixHost completed within 20.447514213s
	I0109 00:10:14.083765  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.087190  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087732  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.087776  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.087968  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.088229  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088467  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.088630  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.088863  451943 main.go:141] libmachine: Using SSH client type: native
	I0109 00:10:14.089367  451943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.81 22 <nil> <nil>}
	I0109 00:10:14.089389  451943 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:10:14.224545  451943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704759014.163550757
	
	I0109 00:10:14.224580  451943 fix.go:206] guest clock: 1704759014.163550757
	I0109 00:10:14.224591  451943 fix.go:219] Guest: 2024-01-09 00:10:14.163550757 +0000 UTC Remote: 2024-01-09 00:10:14.083740733 +0000 UTC m=+363.223126670 (delta=79.810024ms)
	I0109 00:10:14.224620  451943 fix.go:190] guest clock delta is within tolerance: 79.810024ms
	I0109 00:10:14.224627  451943 start.go:83] releasing machines lock for "old-k8s-version-003293", held for 20.588443227s
	I0109 00:10:14.224659  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.224961  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:14.228116  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228565  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.228645  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.228870  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229553  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229781  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:10:14.229882  451943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:10:14.229958  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.230034  451943 ssh_runner.go:195] Run: cat /version.json
	I0109 00:10:14.230062  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:10:14.233060  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233305  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233484  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233511  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233691  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.233903  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:14.233926  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:14.233959  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234064  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:10:14.234220  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234290  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:10:14.234400  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.234418  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:10:14.234557  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:10:14.328685  451943 ssh_runner.go:195] Run: systemctl --version
	I0109 00:10:14.359854  451943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:10:14.515121  451943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:10:14.525585  451943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:10:14.525668  451943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:10:14.549678  451943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:10:14.549719  451943 start.go:475] detecting cgroup driver to use...
	I0109 00:10:14.549804  451943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:10:14.569734  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:10:14.587820  451943 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:10:14.587921  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:10:14.601724  451943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:10:14.615402  451943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:10:14.732774  451943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:10:14.872480  451943 docker.go:219] disabling docker service ...
	I0109 00:10:14.872579  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:10:14.887044  451943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:10:14.904944  451943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:10:15.043833  451943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:10:15.162992  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:10:15.176677  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:10:15.197594  451943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0109 00:10:15.197674  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.207993  451943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:10:15.208071  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.218230  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.228291  451943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:10:15.238163  451943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:10:15.248394  451943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:10:15.257457  451943 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:10:15.257541  451943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:10:15.271604  451943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:10:15.282409  451943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:10:15.401506  451943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:10:15.586851  451943 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:10:15.586942  451943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:10:15.593734  451943 start.go:543] Will wait 60s for crictl version
	I0109 00:10:15.593798  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:15.598705  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:10:15.642640  451943 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:10:15.642751  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.714964  451943 ssh_runner.go:195] Run: crio --version
	I0109 00:10:15.773793  451943 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0109 00:10:15.775287  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetIP
	I0109 00:10:15.778313  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.778769  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:10:15.778795  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:10:15.779046  451943 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:10:15.783496  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:15.795338  451943 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:10:15.795427  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:15.844077  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:15.844162  451943 ssh_runner.go:195] Run: which lz4
	I0109 00:10:15.848502  451943 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:10:15.852893  451943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:10:15.852949  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0109 00:10:12.274183  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:14.770967  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:16.781482  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:12.786247  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.017442  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.128701  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:13.223775  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:13.223873  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:13.724895  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.224593  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:14.724375  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.224993  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.724059  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:15.747019  452488 api_server.go:72] duration metric: took 2.523230788s to wait for apiserver process to appear ...
	I0109 00:10:15.747056  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:15.747083  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.747711  452488 api_server.go:269] stopped: https://192.168.39.73:8444/healthz: Get "https://192.168.39.73:8444/healthz": dial tcp 192.168.39.73:8444: connect: connection refused
	I0109 00:10:16.247411  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:15.407079  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.407307  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:19.407533  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:17.632956  451943 crio.go:444] Took 1.784489 seconds to copy over tarball
	I0109 00:10:17.633087  451943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:10:19.999506  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:19.999551  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:19.999569  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.066949  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:20.066982  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:20.247460  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.256943  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.256985  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:20.747576  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:20.755833  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0109 00:10:20.755892  452488 api_server.go:103] status: https://192.168.39.73:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0109 00:10:21.247473  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:10:21.255476  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:10:21.266074  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:10:21.266115  452488 api_server.go:131] duration metric: took 5.519049271s to wait for apiserver health ...
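
The 500/200 exchange above is the harness polling the apiserver's /healthz endpoint until every poststarthook reports ok. Below is a minimal, hypothetical Go sketch of that kind of retry loop, not the actual api_server.go implementation; the URL is taken from the log, while the poll interval and timeout are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPIServer polls a /healthz URL until it returns HTTP 200 or the
// timeout expires, mirroring the retry pattern visible in the log above.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster's apiserver presents a self-signed CA, so this
		// illustrative sketch skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case
			}
			// otherwise 500 while poststarthooks such as rbac/bootstrap-roles still fail
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.39.73:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
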
	I0109 00:10:21.266127  452488 cni.go:84] Creating CNI manager for ""
	I0109 00:10:21.266136  452488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:21.401812  452488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:10:19.272981  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.770765  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.903126  452488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:21.921050  452488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:21.946628  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:21.959029  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:21.959077  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:10:21.959089  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:10:21.959100  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:10:21.959110  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:10:21.959125  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:10:21.959141  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:10:21.959149  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:10:21.959165  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:10:21.959178  452488 system_pods.go:74] duration metric: took 12.524667ms to wait for pod list to return data ...
	I0109 00:10:21.959198  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:21.963572  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:21.963614  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:21.963629  452488 node_conditions.go:105] duration metric: took 4.420685ms to run NodePressure ...
	I0109 00:10:21.963653  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:23.566660  452488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.602978271s)
	I0109 00:10:23.566704  452488 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573882  452488 kubeadm.go:787] kubelet initialised
	I0109 00:10:23.573911  452488 kubeadm.go:788] duration metric: took 7.19484ms waiting for restarted kubelet to initialise ...
	I0109 00:10:23.573923  452488 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:23.590206  452488 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.603347  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603402  452488 pod_ready.go:81] duration metric: took 13.169776ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.603416  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.603426  452488 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.614946  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.614986  452488 pod_ready.go:81] duration metric: took 11.548332ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.615003  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.615012  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.628345  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628378  452488 pod_ready.go:81] duration metric: took 13.353873ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.628389  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.628396  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.635987  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636023  452488 pod_ready.go:81] duration metric: took 7.619372ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.636043  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.636072  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:23.972993  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973028  452488 pod_ready.go:81] duration metric: took 336.946722ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:23.973040  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-proxy-p9dmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:23.973046  452488 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.371951  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.371991  452488 pod_ready.go:81] duration metric: took 398.932785ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.372016  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.372026  452488 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:24.775778  452488 pod_ready.go:97] node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775825  452488 pod_ready.go:81] duration metric: took 403.787436ms waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:10:24.775842  452488 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-834116" hosting pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:24.775867  452488 pod_ready.go:38] duration metric: took 1.201917208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
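
The "waiting up to 4m0s for pod ... to be Ready" lines above come from the harness's pod-readiness wait, which skips pods whose node is not yet Ready. A small, hypothetical client-go sketch of the underlying readiness check follows; it is illustrative only, and the kubeconfig path and pod name are simply reused from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition on a pod is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17830-399915/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 120; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-csrwr", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
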
	I0109 00:10:24.775895  452488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:10:24.793136  452488 ops.go:34] apiserver oom_adj: -16
	I0109 00:10:24.793169  452488 kubeadm.go:640] restartCluster took 22.990690796s
	I0109 00:10:24.793182  452488 kubeadm.go:406] StartCluster complete in 23.05448254s
	I0109 00:10:24.793207  452488 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.793302  452488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:10:24.795707  452488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:24.796107  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:10:24.796368  452488 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:10:24.796346  452488 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:10:24.796413  452488 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796432  452488 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.796457  452488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-834116"
	I0109 00:10:24.796466  452488 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.796477  452488 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:10:24.796560  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796982  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.796998  452488 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-834116"
	I0109 00:10:24.797017  452488 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:24.797020  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0109 00:10:24.797025  452488 addons.go:246] addon metrics-server should already be in state true
	I0109 00:10:24.797083  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.796987  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797296  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.797477  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.797513  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.803857  452488 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-834116" context rescaled to 1 replicas
	I0109 00:10:24.803958  452488 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:10:24.806278  452488 out.go:177] * Verifying Kubernetes components...
	I0109 00:10:24.807850  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:10:24.817319  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I0109 00:10:24.817600  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0109 00:10:24.817766  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818023  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.818247  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818270  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.818697  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.818899  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.818913  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0109 00:10:24.818937  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.819412  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.819459  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.823502  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.823611  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.824834  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.824859  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.824880  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.825291  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.826131  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.826160  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.829056  452488 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-834116"
	W0109 00:10:24.829115  452488 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:10:24.829158  452488 host.go:66] Checking if "default-k8s-diff-port-834116" exists ...
	I0109 00:10:24.829610  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.829968  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.839969  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0109 00:10:24.840508  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.841140  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.841167  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.841542  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.841864  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.843844  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.846088  452488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:24.844882  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0109 00:10:24.848051  452488 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:24.848069  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:10:24.848093  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.848445  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.849053  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.849074  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.849484  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.849550  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37847
	I0109 00:10:24.849671  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.851401  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.851914  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.851961  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.853938  452488 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:10:22.516402  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:24.907337  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:21.059397  451943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.42624365s)
	I0109 00:10:21.059430  451943 crio.go:451] Took 3.426440 seconds to extract the tarball
	I0109 00:10:21.059441  451943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:10:21.109544  451943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:10:21.177321  451943 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0109 00:10:21.177353  451943 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:10:21.177408  451943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.177455  451943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.177499  451943 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0109 00:10:21.177520  451943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.177679  451943 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.177728  451943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.177688  451943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179256  451943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.179325  451943 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0109 00:10:21.179257  451943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.179429  451943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.179551  451943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.179599  451943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.179888  451943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.180077  451943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.354975  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0109 00:10:21.363097  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.390461  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.393703  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.423416  451943 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0109 00:10:21.423475  451943 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0109 00:10:21.423523  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.433698  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.446038  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.466118  451943 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0109 00:10:21.466213  451943 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.466351  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.499618  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.516687  451943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:10:21.517553  451943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0109 00:10:21.517576  451943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0109 00:10:21.517608  451943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.517642  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0109 00:10:21.517653  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.517609  451943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.517735  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.543109  451943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0109 00:10:21.543170  451943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.543228  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571015  451943 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0109 00:10:21.571069  451943 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.571122  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.571130  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0109 00:10:21.627517  451943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0109 00:10:21.627573  451943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.627623  451943 ssh_runner.go:195] Run: which crictl
	I0109 00:10:21.730620  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0109 00:10:21.730693  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0109 00:10:21.730751  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0109 00:10:21.730772  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0109 00:10:21.730775  451943 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.730876  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0109 00:10:21.730899  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0109 00:10:21.730965  451943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0109 00:10:21.861219  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0109 00:10:21.861308  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0109 00:10:21.870996  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0109 00:10:21.871033  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0109 00:10:21.871087  451943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0109 00:10:21.871117  451943 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0109 00:10:21.871136  451943 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0109 00:10:21.871176  451943 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0109 00:10:23.431278  451943 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.560066098s)
	I0109 00:10:23.431320  451943 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0109 00:10:23.431403  451943 cache_images.go:92] LoadImages completed in 2.25403413s
	W0109 00:10:23.431502  451943 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-399915/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
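
The image-cache sequence above probes the runtime with "podman image inspect", removes stale tags with "crictl rmi", and loads missing images from the local cache with "podman load". The Go sketch below mirrors those steps in simplified, hypothetical form; it shells out on the local host rather than over SSH as the harness does, and the image name and tarball path are reused from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent checks whether the runtime already has the image, mirroring the
// "podman image inspect --format {{.Id}}" probes in the log.
func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// loadCachedImage loads a cached tarball with "podman load" if the image is absent.
func loadCachedImage(image, tarball string) error {
	if imagePresent(image) {
		return nil
	}
	// Remove any stale tag first, as the log does with "crictl rmi".
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
	}
	return nil
}

func main() {
	// Requires podman/crictl on the local host; paths match the log above.
	if err := loadCachedImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}
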
	I0109 00:10:23.431630  451943 ssh_runner.go:195] Run: crio config
	I0109 00:10:23.501412  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:23.501437  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:23.501460  451943 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:10:23.501478  451943 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.81 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-003293 NodeName:old-k8s-version-003293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0109 00:10:23.501642  451943 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-003293"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-003293
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.81:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:10:23.501740  451943 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-003293 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:10:23.501815  451943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0109 00:10:23.515496  451943 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:10:23.515613  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:10:23.528701  451943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0109 00:10:23.549023  451943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:10:23.568686  451943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0109 00:10:23.588702  451943 ssh_runner.go:195] Run: grep 192.168.72.81	control-plane.minikube.internal$ /etc/hosts
	I0109 00:10:23.593056  451943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:10:23.609254  451943 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293 for IP: 192.168.72.81
	I0109 00:10:23.609338  451943 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:10:23.609556  451943 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:10:23.609643  451943 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:10:23.609767  451943 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/client.key
	I0109 00:10:23.609842  451943 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key.289ddd16
	I0109 00:10:23.609908  451943 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key
	I0109 00:10:23.610069  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:10:23.610137  451943 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:10:23.610158  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:10:23.610197  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:10:23.610232  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:10:23.610265  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:10:23.610323  451943 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:10:23.611274  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:10:23.637653  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0109 00:10:23.664578  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:10:23.694133  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/old-k8s-version-003293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:10:23.722658  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:10:23.750223  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:10:23.778539  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:10:23.802865  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:10:23.829553  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:10:23.857468  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:10:23.886744  451943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:10:23.913384  451943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:10:23.931928  451943 ssh_runner.go:195] Run: openssl version
	I0109 00:10:23.938105  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:10:23.949750  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955870  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.955954  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:10:23.962486  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:10:23.975292  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:10:23.988504  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.993956  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:10:23.994025  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:10:24.000015  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:10:24.010775  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:10:24.021665  451943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026909  451943 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.026972  451943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:10:24.032957  451943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:10:24.043813  451943 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:10:24.048745  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0109 00:10:24.055015  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0109 00:10:24.061551  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0109 00:10:24.068075  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0109 00:10:24.075942  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0109 00:10:24.081898  451943 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
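
The openssl invocations above use "-checkend 86400" to ask whether each certificate expires within the next 24 hours (exit status 1 if it does). The hypothetical Go sketch below answers the same question with the standard library; the certificate path is reused from the log and is only readable on the test VM.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
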
	I0109 00:10:24.088900  451943 kubeadm.go:404] StartCluster: {Name:old-k8s-version-003293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-003293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:10:24.089008  451943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:10:24.089075  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:24.138907  451943 cri.go:89] found id: ""
	I0109 00:10:24.139089  451943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:10:24.152607  451943 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0109 00:10:24.152636  451943 kubeadm.go:636] restartCluster start
	I0109 00:10:24.152696  451943 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0109 00:10:24.166246  451943 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.167660  451943 kubeconfig.go:92] found "old-k8s-version-003293" server: "https://192.168.72.81:8443"
	I0109 00:10:24.171161  451943 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0109 00:10:24.183456  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.183533  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.197246  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.684537  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:24.684670  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:24.698158  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.184562  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.184662  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.196624  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:25.684258  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:25.684379  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:25.699808  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:24.852491  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.852608  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.852621  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.855293  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.855444  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.855453  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:10:24.855467  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:10:24.855484  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.855664  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.855746  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.855858  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.856036  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.857435  452488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:10:24.857481  452488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:10:24.858678  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859181  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.859219  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.859402  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.859570  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.859724  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.859856  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:24.875791  452488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0109 00:10:24.876275  452488 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:10:24.876817  452488 main.go:141] libmachine: Using API Version  1
	I0109 00:10:24.876856  452488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:10:24.877200  452488 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:10:24.877454  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetState
	I0109 00:10:24.879333  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .DriverName
	I0109 00:10:24.879644  452488 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:24.879661  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:10:24.879677  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHHostname
	I0109 00:10:24.882683  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883182  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:e8:ec", ip: ""} in network mk-default-k8s-diff-port-834116: {Iface:virbr1 ExpiryTime:2024-01-09 01:09:42 +0000 UTC Type:0 Mac:52:54:00:13:e8:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:default-k8s-diff-port-834116 Clientid:01:52:54:00:13:e8:ec}
	I0109 00:10:24.883208  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | domain default-k8s-diff-port-834116 has defined IP address 192.168.39.73 and MAC address 52:54:00:13:e8:ec in network mk-default-k8s-diff-port-834116
	I0109 00:10:24.883504  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHPort
	I0109 00:10:24.883694  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHKeyPath
	I0109 00:10:24.883877  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .GetSSHUsername
	I0109 00:10:24.884070  452488 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/default-k8s-diff-port-834116/id_rsa Username:docker}
	I0109 00:10:25.036727  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:10:25.071034  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:10:25.071059  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:10:25.079722  452488 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:25.079745  452488 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0109 00:10:25.096822  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:10:25.107155  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:10:25.107187  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:10:25.149550  452488 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:25.149576  452488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:10:25.202736  452488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:10:26.696247  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.659482228s)
	I0109 00:10:26.696317  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696334  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696330  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.599464128s)
	I0109 00:10:26.696379  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696398  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696816  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696856  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696855  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.696865  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696874  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696883  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.696899  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.696908  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.696935  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.696945  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.697254  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697306  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697406  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.697461  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.697410  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.712803  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.712835  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.713140  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.713162  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736360  452488 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.533581555s)
	I0109 00:10:26.736408  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736424  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.736780  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.736826  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.736841  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.736852  452488 main.go:141] libmachine: Making call to close driver server
	I0109 00:10:26.736872  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) Calling .Close
	I0109 00:10:26.737154  452488 main.go:141] libmachine: (default-k8s-diff-port-834116) DBG | Closing plugin on server side
	I0109 00:10:26.737190  452488 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:10:26.737205  452488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:10:26.737215  452488 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-834116"
	I0109 00:10:26.739310  452488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:10:23.774928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.270567  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.740691  452488 addons.go:508] enable addons completed in 1.94435105s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:10:27.084669  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:27.404032  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.407712  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:26.184150  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.184272  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.196020  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:26.684603  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:26.684710  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:26.699571  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.184212  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.184309  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.196193  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:27.684572  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:27.684658  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:27.697405  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.183918  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.184043  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.197428  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:28.684565  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:28.684683  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:28.698124  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.183601  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.183725  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.195941  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:29.683554  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:29.683647  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:29.695548  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.184015  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.184116  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.196332  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:30.684533  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:30.684661  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:30.697315  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
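The retry block above is the probe behind api_server.go:166: kube-apiserver liveness is inferred purely from the exit code of pgrep, with status 1 (no match, empty stdout/stderr) read as "not running yet" and retried roughly twice a second. Below is a minimal local sketch of that probe in Go; it runs pgrep directly rather than through minikube's SSH runner, and the helper name and timeout are illustrative only, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether a kube-apiserver whose command line
    // mentions "minikube" is currently running. pgrep exits 0 on a match and
    // 1 on none, which the log above records as "stopped".
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
    	}
    	fmt.Println("gave up: apiserver process never appeared")
    }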
	I0109 00:10:28.771203  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.269907  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:29.584966  452488 node_ready.go:58] node "default-k8s-diff-port-834116" has status "Ready":"False"
	I0109 00:10:30.585616  452488 node_ready.go:49] node "default-k8s-diff-port-834116" has status "Ready":"True"
	I0109 00:10:30.585646  452488 node_ready.go:38] duration metric: took 5.505876157s waiting for node "default-k8s-diff-port-834116" to be "Ready" ...
	I0109 00:10:30.585661  452488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:10:30.593510  452488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602388  452488 pod_ready.go:92] pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.602420  452488 pod_ready.go:81] duration metric: took 8.875538ms waiting for pod "coredns-5dd5756b68-csrwr" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.602438  452488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608316  452488 pod_ready.go:92] pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.608343  452488 pod_ready.go:81] duration metric: took 5.896652ms waiting for pod "etcd-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.608355  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614031  452488 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.614056  452488 pod_ready.go:81] duration metric: took 5.692676ms waiting for pod "kube-apiserver-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.614068  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619101  452488 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.619120  452488 pod_ready.go:81] duration metric: took 5.045637ms waiting for pod "kube-controller-manager-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.619129  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986089  452488 pod_ready.go:92] pod "kube-proxy-p9dmf" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:30.986121  452488 pod_ready.go:81] duration metric: took 366.984678ms waiting for pod "kube-proxy-p9dmf" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:30.986135  452488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385215  452488 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:31.385244  452488 pod_ready.go:81] duration metric: took 399.100168ms waiting for pod "kube-scheduler-default-k8s-diff-port-834116" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:31.385254  452488 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
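The pod_ready.go waits throughout this log boil down to polling each system pod until its PodReady condition reports True. The sketch below shows that condition check with client-go; it is not minikube's implementation, the kubeconfig path and pod name are simply reused from the log above, and the polling interval and timeout are assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady mirrors the condition the log waits on: a pod counts as
    // "Ready" once its PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path and pod name are taken from the log; both are illustrative.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"etcd-default-k8s-diff-port-834116", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }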
	I0109 00:10:31.904561  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.905393  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:31.183976  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.184088  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:31.683769  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:31.683876  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:31.695944  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.184543  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.184631  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.197273  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:32.683504  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:32.683613  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:32.696431  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.183904  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.183981  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.195623  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:33.684295  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:33.684408  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:33.697442  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.184151  451943 api_server.go:166] Checking apiserver status ...
	I0109 00:10:34.184264  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0109 00:10:34.196371  451943 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0109 00:10:34.196409  451943 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0109 00:10:34.196451  451943 kubeadm.go:1135] stopping kube-system containers ...
	I0109 00:10:34.196467  451943 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0109 00:10:34.196558  451943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:10:34.243566  451943 cri.go:89] found id: ""
	I0109 00:10:34.243656  451943 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0109 00:10:34.260912  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:10:34.270763  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:10:34.270859  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280082  451943 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0109 00:10:34.280114  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:34.411011  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.279804  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.503377  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.616758  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:35.707051  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:10:35.707153  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
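After the stale-config check fails, the log reconfigures the existing control plane by re-running individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full re-init. A rough Go sketch of that sequence follows; it executes the same shell commands shown in the log locally instead of over minikube's SSH runner, and the error handling is illustrative only.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase names and paths mirror the log above; the loop itself is only a
    	// sketch of the reconfiguration sequence, not minikube's implementation.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase,
    		)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    	fmt.Println("control plane reconfigured from /var/tmp/minikube/kubeadm.yaml")
    }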
	I0109 00:10:33.771119  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.271823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:33.399336  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.893942  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:35.905685  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:38.408847  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:36.207669  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:36.708189  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.207300  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:10:37.259562  451943 api_server.go:72] duration metric: took 1.552509336s to wait for apiserver process to appear ...
	I0109 00:10:37.259602  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:10:37.259628  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:38.272478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.272571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:37.894659  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:40.393328  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.393530  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:42.260559  451943 api_server.go:269] stopped: https://192.168.72.81:8443/healthz: Get "https://192.168.72.81:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0109 00:10:42.260609  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.136163  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.136216  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.136236  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.196804  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0109 00:10:43.196846  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0109 00:10:43.260001  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.270495  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.270549  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:43.759989  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:43.813746  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:43.813787  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.260614  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.271111  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0109 00:10:44.271144  451943 api_server.go:103] status: https://192.168.72.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0109 00:10:44.760496  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:10:44.771584  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:10:44.780881  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:10:44.780911  451943 api_server.go:131] duration metric: took 7.521300216s to wait for apiserver health ...
	I0109 00:10:44.780923  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:10:44.780933  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:10:44.783223  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
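The healthz wait above shows the full progression of a restarting apiserver: 403 while anonymous access to /healthz is still forbidden, 500 while poststarthook checks (bootstrap-controller, rbac/bootstrap-roles, ca-registration) are still pending, and finally 200 "ok". A small Go sketch of that polling loop, assuming the endpoint and rough cadence from the log and using InsecureSkipVerify as a stand-in for minikube's CA handling:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// 403 and 500 responses are treated as "not ready yet" and retried;
    	// only a 200 ends the wait, matching the sequence in the log above.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // illustrative overall timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.81:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body)
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for a healthy apiserver")
    }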
	I0109 00:10:40.906182  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:43.407169  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.784832  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:10:44.802495  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:10:44.821665  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:10:44.832420  451943 system_pods.go:59] 8 kube-system pods found
	I0109 00:10:44.832452  451943 system_pods.go:61] "coredns-5644d7b6d9-5hqlw" [b6d5e87b-e72e-47bb-92b2-afecece262c5] Running
	I0109 00:10:44.832456  451943 system_pods.go:61] "coredns-5644d7b6d9-j4nnt" [d8995b4a-0ebf-406b-9937-09ba09591c78] Running
	I0109 00:10:44.832462  451943 system_pods.go:61] "etcd-old-k8s-version-003293" [8b9f9b32-dfe9-4cfe-856b-3aec43645e1e] Running
	I0109 00:10:44.832467  451943 system_pods.go:61] "kube-apiserver-old-k8s-version-003293" [48f5c692-7501-45ae-a53a-49e330129c36] Running
	I0109 00:10:44.832471  451943 system_pods.go:61] "kube-controller-manager-old-k8s-version-003293" [e458a3e9-ae8b-4ab7-bdc5-61b4321cca4a] Running
	I0109 00:10:44.832475  451943 system_pods.go:61] "kube-proxy-bc4tl" [74020495-07c6-441b-9b46-2f6a103d65eb] Running
	I0109 00:10:44.832478  451943 system_pods.go:61] "kube-scheduler-old-k8s-version-003293" [6a8e330c-f4bb-4bfd-b610-9071077fbb0f] Running
	I0109 00:10:44.832482  451943 system_pods.go:61] "storage-provisioner" [cbfd54c3-1952-4c0f-9272-29e2a8a4d5ed] Running
	I0109 00:10:44.832489  451943 system_pods.go:74] duration metric: took 10.801262ms to wait for pod list to return data ...
	I0109 00:10:44.832498  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:10:44.836130  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:10:44.836175  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:10:44.836196  451943 node_conditions.go:105] duration metric: took 3.685161ms to run NodePressure ...
	I0109 00:10:44.836220  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0109 00:10:45.117528  451943 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0109 00:10:45.121965  451943 retry.go:31] will retry after 324.075641ms: kubelet not initialised
	I0109 00:10:45.451702  451943 retry.go:31] will retry after 510.869227ms: kubelet not initialised
	I0109 00:10:42.770145  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.271625  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:44.394539  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:46.894669  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.910325  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:48.406435  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:45.969561  451943 retry.go:31] will retry after 435.571732ms: kubelet not initialised
	I0109 00:10:46.411948  451943 retry.go:31] will retry after 1.046618493s: kubelet not initialised
	I0109 00:10:47.471972  451943 retry.go:31] will retry after 1.328746031s: kubelet not initialised
	I0109 00:10:48.805606  451943 retry.go:31] will retry after 1.964166074s: kubelet not initialised
	I0109 00:10:50.776656  451943 retry.go:31] will retry after 2.966424358s: kubelet not initialised
	I0109 00:10:47.271965  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.773571  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:49.393384  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:51.393857  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:50.905980  452237 pod_ready.go:102] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:52.404441  452237 pod_ready.go:92] pod "coredns-76f75df574-kw4v7" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.404467  452237 pod_ready.go:81] duration metric: took 43.007278698s waiting for pod "coredns-76f75df574-kw4v7" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.404477  452237 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409827  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.409851  452237 pod_ready.go:81] duration metric: took 5.368556ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.409862  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415211  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.415233  452237 pod_ready.go:81] duration metric: took 5.363915ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.415243  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420309  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.420329  452237 pod_ready.go:81] duration metric: took 5.078283ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.420337  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425229  452237 pod_ready.go:92] pod "kube-proxy-kxjqj" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.425251  452237 pod_ready.go:81] duration metric: took 4.908776ms waiting for pod "kube-proxy-kxjqj" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.425260  452237 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.801958  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:10:52.801989  452237 pod_ready.go:81] duration metric: took 376.723222ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:52.802000  452237 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	I0109 00:10:54.811346  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.748552  451943 retry.go:31] will retry after 3.201777002s: kubelet not initialised
	I0109 00:10:52.273938  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:54.771590  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.775438  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:53.422099  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:55.894657  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:57.310528  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:59.313642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:56.956459  451943 retry.go:31] will retry after 6.469663917s: kubelet not initialised
	I0109 00:10:59.272417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.272940  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:10:58.393999  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:00.893766  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:01.809942  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.309972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:03.432087  451943 retry.go:31] will retry after 13.730562228s: kubelet not initialised
	I0109 00:11:03.771273  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.268462  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:02.894171  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:04.894858  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:07.393254  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:06.310613  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.812051  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:08.270554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:10.272757  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:09.893982  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.894729  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:11.310615  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:13.311452  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:12.770003  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.770452  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:14.393106  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:16.394348  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:15.809972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.309870  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:17.168682  451943 retry.go:31] will retry after 14.832819941s: kubelet not initialised
	I0109 00:11:17.271266  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:19.271908  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.771727  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:18.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:21.394025  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:20.808968  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:22.810167  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.773732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:26.269527  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:23.394213  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.893851  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:25.310683  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:27.810354  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:29.814175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.271026  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.271149  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:28.393310  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:30.393582  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.310474  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.312045  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.007072  451943 kubeadm.go:787] kubelet initialised
	I0109 00:11:32.007097  451943 kubeadm.go:788] duration metric: took 46.889534921s waiting for restarted kubelet to initialise ...
	I0109 00:11:32.007109  451943 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:11:32.012969  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018937  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.018957  451943 pod_ready.go:81] duration metric: took 5.963591ms waiting for pod "coredns-5644d7b6d9-5hqlw" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.018975  451943 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028039  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.028067  451943 pod_ready.go:81] duration metric: took 9.084525ms waiting for pod "coredns-5644d7b6d9-j4nnt" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.028078  451943 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032808  451943 pod_ready.go:92] pod "etcd-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.032832  451943 pod_ready.go:81] duration metric: took 4.746043ms waiting for pod "etcd-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.032843  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037435  451943 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.037466  451943 pod_ready.go:81] duration metric: took 4.610014ms waiting for pod "kube-apiserver-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.037478  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405716  451943 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.405742  451943 pod_ready.go:81] duration metric: took 368.257236ms waiting for pod "kube-controller-manager-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.405760  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806721  451943 pod_ready.go:92] pod "kube-proxy-bc4tl" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:32.806747  451943 pod_ready.go:81] duration metric: took 400.981273ms waiting for pod "kube-proxy-bc4tl" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:32.806756  451943 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205810  451943 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace has status "Ready":"True"
	I0109 00:11:33.205840  451943 pod_ready.go:81] duration metric: took 399.074693ms waiting for pod "kube-scheduler-old-k8s-version-003293" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:33.205855  451943 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	I0109 00:11:35.213679  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.271553  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:34.773998  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:32.893079  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:35.393616  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.393839  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:36.809214  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:38.809702  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.714222  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.213748  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:37.270073  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.270564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.771950  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:39.894200  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:41.895632  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:40.810676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:43.310394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:42.214955  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.713236  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.270745  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.769008  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:44.395323  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.893378  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:45.811067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.310292  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:46.713278  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:49.212583  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.769858  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.270380  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:48.894013  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.896386  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:50.311125  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:52.809499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:54.811339  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:51.213641  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.214157  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.711725  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.271867  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.771478  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:53.393541  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:55.894575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.310953  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:59.809359  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:57.713429  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.215472  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.270445  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.770718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:11:58.393555  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:00.892932  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:01.810389  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:04.311994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:02.713532  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.213545  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.270633  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.771349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:03.392243  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:05.393601  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:06.809758  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.310090  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.713345  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.713636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.774207  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:10.271536  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:07.892992  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:09.894465  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.394064  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.310240  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.311902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:11.713857  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:13.714968  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:12.770737  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.271471  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:14.893031  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.393146  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:15.312766  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.808902  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:16.213122  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:18.215771  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.713269  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:17.772762  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.274611  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:19.399686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:21.895279  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:20.315434  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.809703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.813460  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:23.215054  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.216598  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:22.771192  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:25.271732  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:24.392768  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:26.393642  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.309913  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.310558  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.713280  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:29.713388  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:27.771683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.269862  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:28.892939  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:30.894280  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:31.310860  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.313161  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.215375  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.713965  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:32.271111  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:34.770162  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:33.393271  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.393849  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:35.811747  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:38.311158  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.212773  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.712777  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.273180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.274403  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.770772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:37.893508  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:39.893834  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.394002  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:40.311402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:42.809836  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:41.714285  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.213161  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:43.772982  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.269879  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:44.893044  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.894333  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:45.310764  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:47.810622  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:46.213392  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.214029  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.712956  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:48.273388  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.772779  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:49.393068  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:51.894350  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:50.314344  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:52.809208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.809757  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.213473  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.213609  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:53.270014  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:55.270513  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:54.392981  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:56.896752  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.310923  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.809897  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.713409  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:00.213074  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:57.771956  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.772597  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.776736  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:12:59.392477  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.393047  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:01.810055  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.316038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:02.214227  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.714073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:04.271552  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.274081  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:03.394211  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:05.892722  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:06.808153  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.809658  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.213252  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:09.214016  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:08.771514  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.271265  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:07.893535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.394062  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:10.811210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.309480  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:11.713294  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.714070  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:13.274656  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.770363  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:12.892232  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:14.892967  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.893970  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:15.309955  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.310537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.312112  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:16.213649  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:18.712398  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:20.713447  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:17.770504  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.776344  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:19.391934  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.393412  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:21.809067  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.811245  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.715248  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.215489  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:22.270417  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:24.276304  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.771255  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:23.892801  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:26.395553  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:25.815479  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.309581  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:27.713470  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:29.713667  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.772564  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.270216  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:28.892655  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.893557  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:30.310454  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.311950  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.809831  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:31.714418  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:34.213103  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:33.270895  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.772159  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:32.894686  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:35.393366  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.810699  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.315029  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:36.217502  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:38.713073  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.772491  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:40.269651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:37.894503  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:39.895994  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.393607  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.808659  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.809657  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:41.212704  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:43.713415  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:42.270157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.769816  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.770516  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:44.394641  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.895010  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.310425  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.310812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:46.213445  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:48.714493  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:49.270269  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.262625  451984 pod_ready.go:81] duration metric: took 4m0.000332739s waiting for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" ...
	E0109 00:13:50.262665  451984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2p4js" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:13:50.262695  451984 pod_ready.go:38] duration metric: took 4m14.064299354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:13:50.262735  451984 kubeadm.go:640] restartCluster took 4m35.223413047s
	W0109 00:13:50.262837  451984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:13:50.262989  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:13:49.394039  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.893287  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:50.809875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.311275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:51.214302  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.215860  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.714407  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:53.893351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.895250  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:55.811061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:57.811763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.213089  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.214795  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:13:58.393252  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:00.394330  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.395864  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:03.952243  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.689217944s)
	I0109 00:14:03.952404  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:03.965852  451984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:14:03.975784  451984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:14:03.984599  451984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:14:03.984649  451984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:14:04.041116  451984 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:14:04.041179  451984 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:14:04.213643  451984 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:14:04.213797  451984 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:14:04.213932  451984 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:14:04.470597  451984 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:14:00.312213  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:02.813799  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.816592  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.472836  451984 out.go:204]   - Generating certificates and keys ...
	I0109 00:14:04.473031  451984 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:14:04.473115  451984 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:14:04.473210  451984 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:14:04.473272  451984 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:14:04.473376  451984 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:14:04.473804  451984 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:14:04.474373  451984 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:14:04.474832  451984 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:14:04.475386  451984 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:14:04.475875  451984 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:14:04.476290  451984 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:14:04.476378  451984 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:14:04.599856  451984 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:14:04.905946  451984 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:14:05.274703  451984 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:14:05.463087  451984 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:14:05.464020  451984 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:14:05.468993  451984 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:14:02.215257  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:04.714764  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:05.471038  451984 out.go:204]   - Booting up control plane ...
	I0109 00:14:05.471146  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:14:05.471245  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:14:05.471342  451984 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:14:05.488208  451984 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:14:05.489177  451984 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:14:05.489282  451984 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:14:05.629700  451984 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:14:04.895593  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.396575  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.310589  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.809734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:07.212902  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.214384  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:09.895351  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:12.397437  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.633863  451984 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004133 seconds
	I0109 00:14:13.634067  451984 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:14:13.657224  451984 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:14:14.196593  451984 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:14:14.196798  451984 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-845373 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:14:14.715124  451984 kubeadm.go:322] [bootstrap-token] Using token: 0z1u86.ex8qfq3o12xtqu87
	I0109 00:14:14.716600  451984 out.go:204]   - Configuring RBAC rules ...
	I0109 00:14:14.716727  451984 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:14:14.724791  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:14:14.734361  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:14:14.742345  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:14:14.749616  451984 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:14:14.753942  451984 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:14:14.774188  451984 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:14:15.042710  451984 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:14:15.131751  451984 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:14:15.132745  451984 kubeadm.go:322] 
	I0109 00:14:15.132804  451984 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:14:15.132810  451984 kubeadm.go:322] 
	I0109 00:14:15.132872  451984 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:14:15.132879  451984 kubeadm.go:322] 
	I0109 00:14:15.132898  451984 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:14:15.132959  451984 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:14:15.133067  451984 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:14:15.133094  451984 kubeadm.go:322] 
	I0109 00:14:15.133160  451984 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:14:15.133173  451984 kubeadm.go:322] 
	I0109 00:14:15.133229  451984 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:14:15.133241  451984 kubeadm.go:322] 
	I0109 00:14:15.133313  451984 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:14:15.133412  451984 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:14:15.133510  451984 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:14:15.133524  451984 kubeadm.go:322] 
	I0109 00:14:15.133644  451984 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:14:15.133761  451984 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:14:15.133777  451984 kubeadm.go:322] 
	I0109 00:14:15.133882  451984 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134003  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:14:15.134030  451984 kubeadm.go:322] 	--control-plane 
	I0109 00:14:15.134037  451984 kubeadm.go:322] 
	I0109 00:14:15.134137  451984 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:14:15.134145  451984 kubeadm.go:322] 
	I0109 00:14:15.134240  451984 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0z1u86.ex8qfq3o12xtqu87 \
	I0109 00:14:15.134415  451984 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:14:15.135483  451984 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:14:15.135524  451984 cni.go:84] Creating CNI manager for ""
	I0109 00:14:15.135536  451984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:14:15.137331  451984 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:14:11.810358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.813252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:11.214971  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:13.713322  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.714895  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:15.138794  451984 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:14:15.164722  451984 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:14:15.236472  451984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:14:15.236536  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.236558  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=embed-certs-845373 minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.353564  451984 ops.go:34] apiserver oom_adj: -16
	I0109 00:14:15.675801  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.176590  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.676619  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.176120  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.893438  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.896780  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:16.311939  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.312023  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:18.213002  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.214958  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:17.676614  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.176469  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.676367  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.176646  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.676613  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.176615  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:20.676641  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.176075  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:21.676489  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:22.176784  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:19.395936  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:21.892353  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:20.810687  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.810879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.713569  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:25.213852  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:22.676054  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.176662  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.676911  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.175927  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:24.676685  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.176625  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:25.676281  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.176650  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:26.675943  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.176834  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:23.894745  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:26.394535  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.676594  451984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:27.846642  451984 kubeadm.go:1088] duration metric: took 12.610179243s to wait for elevateKubeSystemPrivileges.
	I0109 00:14:27.846694  451984 kubeadm.go:406] StartCluster complete in 5m12.860674926s
	I0109 00:14:27.846775  451984 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.846922  451984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:14:27.849568  451984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:27.849886  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:14:27.850039  451984 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:14:27.850143  451984 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:14:27.850168  451984 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845373"
	I0109 00:14:27.850185  451984 addons.go:69] Setting metrics-server=true in profile "embed-certs-845373"
	I0109 00:14:27.850196  451984 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-845373"
	W0109 00:14:27.850206  451984 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:14:27.850209  451984 addons.go:237] Setting addon metrics-server=true in "embed-certs-845373"
	W0109 00:14:27.850226  451984 addons.go:246] addon metrics-server should already be in state true
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850308  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.850780  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850804  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850886  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.850916  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.850174  451984 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845373"
	I0109 00:14:27.850983  451984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845373"
	I0109 00:14:27.851436  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.851473  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.869118  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I0109 00:14:27.869634  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.870272  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.870301  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.870793  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.870883  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0109 00:14:27.871047  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0109 00:14:27.871320  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871380  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.871694  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.871740  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.871880  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871910  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.871917  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.871934  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.872311  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872318  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.872472  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.872864  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.872907  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.875833  451984 addons.go:237] Setting addon default-storageclass=true in "embed-certs-845373"
	W0109 00:14:27.875851  451984 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:14:27.875874  451984 host.go:66] Checking if "embed-certs-845373" exists ...
	I0109 00:14:27.876143  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.876172  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0109 00:14:27.892642  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0109 00:14:27.892603  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0109 00:14:27.893165  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893218  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893382  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.893725  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893751  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.893889  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.893906  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894287  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894344  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894351  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.894366  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.894531  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.894905  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.894920  451984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:14:27.894955  451984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:14:27.895325  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.897315  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.897565  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.899343  451984 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:14:27.901058  451984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:14:27.903097  451984 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:27.903113  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:14:27.903129  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.901085  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:14:27.903182  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:14:27.903190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.907703  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908100  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908474  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908505  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908744  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.908765  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.908869  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.908924  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.909079  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909118  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.909274  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909303  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.909444  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.909660  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:27.913404  451984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0109 00:14:27.913992  451984 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:14:27.914388  451984 main.go:141] libmachine: Using API Version  1
	I0109 00:14:27.914409  451984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:14:27.914831  451984 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:14:27.915055  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetState
	I0109 00:14:27.916650  451984 main.go:141] libmachine: (embed-certs-845373) Calling .DriverName
	I0109 00:14:27.916872  451984 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:27.916891  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:14:27.916911  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHHostname
	I0109 00:14:27.919557  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.919945  451984 main.go:141] libmachine: (embed-certs-845373) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:26:23", ip: ""} in network mk-embed-certs-845373: {Iface:virbr4 ExpiryTime:2024-01-09 01:09:00 +0000 UTC Type:0 Mac:52:54:00:5b:26:23 Iaid: IPaddr:192.168.50.132 Prefix:24 Hostname:embed-certs-845373 Clientid:01:52:54:00:5b:26:23}
	I0109 00:14:27.919962  451984 main.go:141] libmachine: (embed-certs-845373) DBG | domain embed-certs-845373 has defined IP address 192.168.50.132 and MAC address 52:54:00:5b:26:23 in network mk-embed-certs-845373
	I0109 00:14:27.920188  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHPort
	I0109 00:14:27.920346  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHKeyPath
	I0109 00:14:27.920520  451984 main.go:141] libmachine: (embed-certs-845373) Calling .GetSSHUsername
	I0109 00:14:27.920627  451984 sshutil.go:53] new ssh client: &{IP:192.168.50.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/embed-certs-845373/id_rsa Username:docker}
	I0109 00:14:28.169436  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:14:28.180527  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:28.194004  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:14:28.194025  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:14:28.216619  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:28.258292  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:14:28.258321  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:14:28.320624  451984 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:28.320652  451984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:14:28.355471  451984 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-845373" context rescaled to 1 replicas
	I0109 00:14:28.355514  451984 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.132 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:14:28.357573  451984 out.go:177] * Verifying Kubernetes components...
	I0109 00:14:25.309676  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:27.312462  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:29.810262  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:28.359075  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:28.379542  451984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:14:30.061115  451984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.891626144s)
	I0109 00:14:30.061149  451984 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0109 00:14:30.452861  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.236197297s)
	I0109 00:14:30.452929  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.452943  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.452943  451984 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.09383281s)
	I0109 00:14:30.453122  451984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.453131  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.272573904s)
	I0109 00:14:30.453293  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453306  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453320  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453311  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453332  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453342  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.453674  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453693  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453700  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453708  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.453740  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.453752  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.453764  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.453784  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.454074  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.454093  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.454107  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.457209  451984 node_ready.go:49] node "embed-certs-845373" has status "Ready":"True"
	I0109 00:14:30.457229  451984 node_ready.go:38] duration metric: took 4.077361ms waiting for node "embed-certs-845373" to be "Ready" ...
	I0109 00:14:30.457238  451984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:30.488244  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.488275  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.488609  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.488634  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.488660  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.489887  451984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:30.508615  451984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.129028413s)
	I0109 00:14:30.508663  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.508677  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.508966  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.509058  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509152  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509175  451984 main.go:141] libmachine: Making call to close driver server
	I0109 00:14:30.509190  451984 main.go:141] libmachine: (embed-certs-845373) Calling .Close
	I0109 00:14:30.509535  451984 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:14:30.509564  451984 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:14:30.509578  451984 addons.go:473] Verifying addon metrics-server=true in "embed-certs-845373"
	I0109 00:14:30.509582  451984 main.go:141] libmachine: (embed-certs-845373) DBG | Closing plugin on server side
	I0109 00:14:30.511636  451984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0109 00:14:27.714663  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.213049  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.513246  451984 addons.go:508] enable addons completed in 2.663216413s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0109 00:14:31.999091  451984 pod_ready.go:92] pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:31.999122  451984 pod_ready.go:81] duration metric: took 1.509214799s waiting for pod "coredns-5dd5756b68-j5mzp" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:31.999131  451984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005047  451984 pod_ready.go:92] pod "etcd-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.005077  451984 pod_ready.go:81] duration metric: took 5.937291ms waiting for pod "etcd-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.005091  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011823  451984 pod_ready.go:92] pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.011853  451984 pod_ready.go:81] duration metric: took 6.752071ms waiting for pod "kube-apiserver-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.011866  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017760  451984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.017782  451984 pod_ready.go:81] duration metric: took 5.908986ms waiting for pod "kube-controller-manager-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.017792  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058063  451984 pod_ready.go:92] pod "kube-proxy-nxtn2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.058094  451984 pod_ready.go:81] duration metric: took 40.295825ms waiting for pod "kube-proxy-nxtn2" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.058104  451984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:28.397781  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:30.894153  452488 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:31.394151  452488 pod_ready.go:81] duration metric: took 4m0.008881128s waiting for pod "metrics-server-57f55c9bc5-mbf7k" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:31.394180  452488 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:14:31.394191  452488 pod_ready.go:38] duration metric: took 4m0.808517944s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:31.394210  452488 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:14:31.394307  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:31.394397  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:31.457897  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:31.457929  452488 cri.go:89] found id: ""
	I0109 00:14:31.457941  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:31.458002  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.463534  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:31.463632  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:31.524249  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:31.524284  452488 cri.go:89] found id: ""
	I0109 00:14:31.524296  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:31.524363  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.529188  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:31.529260  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:31.583505  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:31.583543  452488 cri.go:89] found id: ""
	I0109 00:14:31.583554  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:31.583618  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.589373  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:31.589466  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:31.639895  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:31.639931  452488 cri.go:89] found id: ""
	I0109 00:14:31.639942  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:31.640016  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.644881  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:31.644952  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:31.686002  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.686031  452488 cri.go:89] found id: ""
	I0109 00:14:31.686047  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:31.686114  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.691664  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:31.691754  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:31.745729  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:31.745757  452488 cri.go:89] found id: ""
	I0109 00:14:31.745766  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:31.745829  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.751116  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:31.751192  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:31.794856  452488 cri.go:89] found id: ""
	I0109 00:14:31.794890  452488 logs.go:284] 0 containers: []
	W0109 00:14:31.794901  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:31.794909  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:31.794976  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:31.840973  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:31.840999  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:31.841006  452488 cri.go:89] found id: ""
	I0109 00:14:31.841014  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:31.841084  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.845852  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:31.850824  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:31.850851  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:31.914344  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:31.914404  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:31.958899  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:31.958934  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:32.021319  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:32.021353  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:32.074995  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:32.075034  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:32.089535  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:32.089572  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:32.244418  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:32.244460  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:32.288116  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:32.288161  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:32.332939  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:32.332980  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:32.378455  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:32.378487  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:32.437376  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:32.437421  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:31.813208  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.311338  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.215522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:34.712223  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.460309  451984 pod_ready.go:92] pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:32.460343  451984 pod_ready.go:81] duration metric: took 402.230769ms waiting for pod "kube-scheduler-embed-certs-845373" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:32.460358  451984 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:34.470103  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.470854  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:32.911300  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:32.911345  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:32.959902  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:32.959942  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.500402  452488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:14:35.516569  452488 api_server.go:72] duration metric: took 4m10.712558057s to wait for apiserver process to appear ...
	I0109 00:14:35.516600  452488 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:14:35.516640  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:35.516690  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:35.559395  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:35.559421  452488 cri.go:89] found id: ""
	I0109 00:14:35.559429  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:35.559497  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.564381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:35.564468  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:35.604963  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:35.604991  452488 cri.go:89] found id: ""
	I0109 00:14:35.605004  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:35.605074  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.610352  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:35.610412  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:35.655316  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:35.655353  452488 cri.go:89] found id: ""
	I0109 00:14:35.655381  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:35.655471  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.660932  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:35.661015  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:35.702201  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:35.702228  452488 cri.go:89] found id: ""
	I0109 00:14:35.702237  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:35.702297  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.707544  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:35.707615  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:35.755445  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:35.755478  452488 cri.go:89] found id: ""
	I0109 00:14:35.755489  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:35.755555  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.760393  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:35.760470  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:35.813641  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:35.813672  452488 cri.go:89] found id: ""
	I0109 00:14:35.813682  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:35.813749  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.819342  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:35.819495  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:35.861693  452488 cri.go:89] found id: ""
	I0109 00:14:35.861723  452488 logs.go:284] 0 containers: []
	W0109 00:14:35.861732  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:35.861740  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:35.861807  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:35.900886  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:35.900931  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:35.900937  452488 cri.go:89] found id: ""
	I0109 00:14:35.900945  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:35.901005  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.905463  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:35.910271  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:35.910300  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:36.056761  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:36.056798  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:36.096707  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:36.096739  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:36.555891  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:36.555936  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:36.573167  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:36.573196  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:36.622139  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:36.622169  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:36.680395  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:36.680435  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:36.740350  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:36.740389  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:36.779409  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:36.779443  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:36.837425  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:36.837474  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:36.892724  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:36.892763  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:36.939944  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:36.939979  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:36.999567  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:36.999612  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:36.810729  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.810924  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:36.713630  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.213516  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:38.970746  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.468803  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:39.546015  452488 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8444/healthz ...
	I0109 00:14:39.551932  452488 api_server.go:279] https://192.168.39.73:8444/healthz returned 200:
	ok
	I0109 00:14:39.553444  452488 api_server.go:141] control plane version: v1.28.4
	I0109 00:14:39.553469  452488 api_server.go:131] duration metric: took 4.036861283s to wait for apiserver health ...
	I0109 00:14:39.553480  452488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:14:39.553512  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:14:39.553592  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:14:39.597338  452488 cri.go:89] found id: "fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:39.597368  452488 cri.go:89] found id: ""
	I0109 00:14:39.597381  452488 logs.go:284] 1 containers: [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc]
	I0109 00:14:39.597450  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.602381  452488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:14:39.602473  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:14:39.643738  452488 cri.go:89] found id: "8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:39.643776  452488 cri.go:89] found id: ""
	I0109 00:14:39.643787  452488 logs.go:284] 1 containers: [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823]
	I0109 00:14:39.643854  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.649021  452488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:14:39.649096  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:14:39.692903  452488 cri.go:89] found id: "bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:39.692926  452488 cri.go:89] found id: ""
	I0109 00:14:39.692934  452488 logs.go:284] 1 containers: [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd]
	I0109 00:14:39.692992  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.697806  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:14:39.697882  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:14:39.746679  452488 cri.go:89] found id: "a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:39.746706  452488 cri.go:89] found id: ""
	I0109 00:14:39.746716  452488 logs.go:284] 1 containers: [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c]
	I0109 00:14:39.746765  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.752396  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:14:39.752459  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:14:39.800438  452488 cri.go:89] found id: "301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:39.800461  452488 cri.go:89] found id: ""
	I0109 00:14:39.800470  452488 logs.go:284] 1 containers: [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc]
	I0109 00:14:39.800535  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.805644  452488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:14:39.805737  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:14:39.847341  452488 cri.go:89] found id: "2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:39.847387  452488 cri.go:89] found id: ""
	I0109 00:14:39.847398  452488 logs.go:284] 1 containers: [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46]
	I0109 00:14:39.847465  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.851972  452488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:14:39.852053  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:14:39.899183  452488 cri.go:89] found id: ""
	I0109 00:14:39.899219  452488 logs.go:284] 0 containers: []
	W0109 00:14:39.899231  452488 logs.go:286] No container was found matching "kindnet"
	I0109 00:14:39.899239  452488 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:14:39.899309  452488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:14:39.958353  452488 cri.go:89] found id: "a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:39.958395  452488 cri.go:89] found id: "f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:39.958400  452488 cri.go:89] found id: ""
	I0109 00:14:39.958409  452488 logs.go:284] 2 containers: [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57]
	I0109 00:14:39.958469  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.963264  452488 ssh_runner.go:195] Run: which crictl
	I0109 00:14:39.968827  452488 logs.go:123] Gathering logs for kube-scheduler [a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c] ...
	I0109 00:14:39.968858  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a457619a25952d5a45979b918cbfe53c330b3d71adfef120a95d22e3415efe2c"
	I0109 00:14:40.015655  452488 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:14:40.015685  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:14:40.161910  452488 logs.go:123] Gathering logs for coredns [bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd] ...
	I0109 00:14:40.161944  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd1948e3c50bc8fa302460b8818eebc5d4272f073edda7da022ad1274d6b17cd"
	I0109 00:14:40.200197  452488 logs.go:123] Gathering logs for kube-proxy [301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc] ...
	I0109 00:14:40.200233  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301f60b371271e949a16c7163eca971fbf85a6c4ed96fe22a3fd1a6227a2efdc"
	I0109 00:14:40.244075  452488 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:14:40.244119  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:14:40.655095  452488 logs.go:123] Gathering logs for container status ...
	I0109 00:14:40.655160  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:14:40.711957  452488 logs.go:123] Gathering logs for kube-apiserver [fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc] ...
	I0109 00:14:40.712004  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc9430c284b97fd578d4bf0b59d0f5c367706b28179fbb13495e8f66cd24d8cc"
	I0109 00:14:40.765456  452488 logs.go:123] Gathering logs for etcd [8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823] ...
	I0109 00:14:40.765503  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc2cc6a6ffc0c8e62daaa04ea093502372b2ea553adbeee4099d2b631339823"
	I0109 00:14:40.824273  452488 logs.go:123] Gathering logs for kube-controller-manager [2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46] ...
	I0109 00:14:40.824320  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a0d4cebebe6e073e59c425be0d32547d87b91c78c09b1fe9b0b72672975fc46"
	I0109 00:14:40.887213  452488 logs.go:123] Gathering logs for storage-provisioner [a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7] ...
	I0109 00:14:40.887252  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd42aafbd15888a2b0b5d034ec85668754615f626d776b261ef0a9b39b2fd7"
	I0109 00:14:40.925809  452488 logs.go:123] Gathering logs for storage-provisioner [f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57] ...
	I0109 00:14:40.925842  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2c5c87fdbe85eabea075835c68ada5813c73a60e4b1dde98acf96b19e8bee57"
	I0109 00:14:40.967599  452488 logs.go:123] Gathering logs for kubelet ...
	I0109 00:14:40.967635  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:14:41.021163  452488 logs.go:123] Gathering logs for dmesg ...
	I0109 00:14:41.021219  452488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:14:43.543901  452488 system_pods.go:59] 8 kube-system pods found
	I0109 00:14:43.543933  452488 system_pods.go:61] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.543938  452488 system_pods.go:61] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.543943  452488 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.543947  452488 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.543951  452488 system_pods.go:61] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.543955  452488 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.543962  452488 system_pods.go:61] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.543966  452488 system_pods.go:61] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.543974  452488 system_pods.go:74] duration metric: took 3.990487712s to wait for pod list to return data ...
	I0109 00:14:43.543982  452488 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:14:43.547032  452488 default_sa.go:45] found service account: "default"
	I0109 00:14:43.547063  452488 default_sa.go:55] duration metric: took 3.07377ms for default service account to be created ...
	I0109 00:14:43.547075  452488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:14:43.554265  452488 system_pods.go:86] 8 kube-system pods found
	I0109 00:14:43.554305  452488 system_pods.go:89] "coredns-5dd5756b68-csrwr" [2c3945dd-9c1f-4224-a8f4-c9abc2ac42e4] Running
	I0109 00:14:43.554314  452488 system_pods.go:89] "etcd-default-k8s-diff-port-834116" [af478bb1-7e28-471c-b193-7b2834d42779] Running
	I0109 00:14:43.554322  452488 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-834116" [18a0493b-c574-4eb2-b268-de4d1e96b0b4] Running
	I0109 00:14:43.554329  452488 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-834116" [d23788eb-5c87-4151-8a4d-40aee7bc997a] Running
	I0109 00:14:43.554336  452488 system_pods.go:89] "kube-proxy-p9dmf" [bbf297f4-2dc1-48b8-9fd6-830c17bf25fc] Running
	I0109 00:14:43.554343  452488 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-834116" [3e514c3e-b439-45b6-afd1-9de6ca1629ce] Running
	I0109 00:14:43.554356  452488 system_pods.go:89] "metrics-server-57f55c9bc5-mbf7k" [61b7ea36-0b24-42e9-9937-d20ea545f63d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:14:43.554397  452488 system_pods.go:89] "storage-provisioner" [49bd18e5-b0c3-4eaa-83e6-2d347d47e505] Running
	I0109 00:14:43.554420  452488 system_pods.go:126] duration metric: took 7.336546ms to wait for k8s-apps to be running ...
	I0109 00:14:43.554431  452488 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:14:43.554494  452488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:43.570839  452488 system_svc.go:56] duration metric: took 16.394034ms WaitForService to wait for kubelet.
	I0109 00:14:43.570874  452488 kubeadm.go:581] duration metric: took 4m18.766870325s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:14:43.570904  452488 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:14:43.575087  452488 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:14:43.575115  452488 node_conditions.go:123] node cpu capacity is 2
	I0109 00:14:43.575127  452488 node_conditions.go:105] duration metric: took 4.218446ms to run NodePressure ...
	I0109 00:14:43.575139  452488 start.go:228] waiting for startup goroutines ...
	I0109 00:14:43.575145  452488 start.go:233] waiting for cluster config update ...
	I0109 00:14:43.575154  452488 start.go:242] writing updated cluster config ...
	I0109 00:14:43.575452  452488 ssh_runner.go:195] Run: rm -f paused
	I0109 00:14:43.636407  452488 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:14:43.638597  452488 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-834116" cluster and "default" namespace by default
	I0109 00:14:40.814426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.310989  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:41.214186  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.714118  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:43.968087  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.968943  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:45.809788  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:47.810189  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:46.213897  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.714327  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.716636  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:48.472384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.473405  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:50.310188  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.311048  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.803108  452237 pod_ready.go:81] duration metric: took 4m0.001087466s waiting for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" ...
	E0109 00:14:52.803148  452237 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-th24j" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:14:52.803179  452237 pod_ready.go:38] duration metric: took 4m43.413410939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:52.803217  452237 kubeadm.go:640] restartCluster took 5m4.419560589s
	W0109 00:14:52.803342  452237 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:14:52.803433  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
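The pod_ready entries above poll each pod's Ready condition until it turns True or the 4m0s budget is exhausted, after which the cluster is reset rather than restarted. A minimal client-go sketch of such a readiness poll (an illustration using a kubeconfig path and the pod name taken from the log, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed from the log
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-th24j", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}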
	I0109 00:14:53.213308  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.215229  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:52.972718  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:55.470546  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.714170  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:00.213742  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:57.968558  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:59.969971  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:01.970573  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:02.713539  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:05.213339  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:04.470909  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:06.976278  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:07.153986  452237 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.350512063s)
	I0109 00:15:07.154091  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:07.169206  452237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:07.180120  452237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:07.190689  452237 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
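The "config check failed, skipping stale config cleanup" entry is the expected outcome here: the preceding kubeadm reset removed the four kubeconfig files under /etc/kubernetes, so the existence probe exits with status 2 and the stale-config cleanup is skipped before kubeadm init runs. A minimal local sketch of the same probe (illustrative; the real check runs ls over SSH on the guest):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The four kubeconfig files kubeadm writes on the control-plane node.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	missing := 0
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("cannot access %s: %v\n", f, err)
			missing++
		}
	}
	if missing > 0 {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}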
	I0109 00:15:07.190746  452237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:15:07.249723  452237 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:15:07.249803  452237 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:07.413454  452237 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:07.413648  452237 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:07.413809  452237 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:07.666677  452237 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:07.668620  452237 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:07.668736  452237 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:07.668869  452237 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:07.669044  452237 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:07.669122  452237 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:07.669206  452237 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:07.669265  452237 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:07.669338  452237 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:07.669409  452237 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:07.669493  452237 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:07.669587  452237 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:07.669632  452237 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:07.669698  452237 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:07.892774  452237 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:08.387341  452237 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:15:08.697850  452237 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:09.110380  452237 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:09.182970  452237 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:09.183625  452237 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:09.186350  452237 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:09.188402  452237 out.go:204]   - Booting up control plane ...
	I0109 00:15:09.188494  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:09.188620  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:09.190877  452237 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:09.210069  452237 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:09.213806  452237 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:09.214168  452237 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:15:09.348180  452237 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:07.713522  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:10.212932  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:09.468413  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:11.472366  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:12.214158  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:14.713831  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:13.968332  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:15.970174  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:17.853084  452237 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502974 seconds
	I0109 00:15:17.871025  452237 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:17.897430  452237 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:18.444483  452237 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:18.444785  452237 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-378213 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:15:18.959611  452237 kubeadm.go:322] [bootstrap-token] Using token: dhjf8u.939ptni0q22ypfw8
	I0109 00:15:18.961445  452237 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:18.961621  452237 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:18.976769  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:15:18.986315  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:18.991512  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:18.996317  452237 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:19.001219  452237 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:19.018739  452237 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:15:19.300703  452237 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:19.384320  452237 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:19.385524  452237 kubeadm.go:322] 
	I0109 00:15:19.385609  452237 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:19.385646  452237 kubeadm.go:322] 
	I0109 00:15:19.385746  452237 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:19.385759  452237 kubeadm.go:322] 
	I0109 00:15:19.385780  452237 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:19.385851  452237 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:19.385894  452237 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:19.385902  452237 kubeadm.go:322] 
	I0109 00:15:19.385976  452237 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:15:19.385984  452237 kubeadm.go:322] 
	I0109 00:15:19.386052  452237 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:15:19.386063  452237 kubeadm.go:322] 
	I0109 00:15:19.386140  452237 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:19.386255  452237 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:19.386338  452237 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:19.386348  452237 kubeadm.go:322] 
	I0109 00:15:19.386445  452237 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:15:19.386563  452237 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:19.386588  452237 kubeadm.go:322] 
	I0109 00:15:19.386704  452237 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.386865  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:19.386893  452237 kubeadm.go:322] 	--control-plane 
	I0109 00:15:19.386900  452237 kubeadm.go:322] 
	I0109 00:15:19.387013  452237 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:19.387023  452237 kubeadm.go:322] 
	I0109 00:15:19.387156  452237 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dhjf8u.939ptni0q22ypfw8 \
	I0109 00:15:19.387306  452237 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:19.388274  452237 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:19.388386  452237 cni.go:84] Creating CNI manager for ""
	I0109 00:15:19.388404  452237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:19.390641  452237 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:19.392729  452237 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:19.420375  452237 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
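The two commands above create /etc/cni/net.d and copy a 457-byte bridge conflist into it so that CRI-O has a default pod network. The log does not show the file's contents; the sketch below writes a typical bridge plus host-local configuration of that kind, where the subnet, plugin list, and exact layout are assumptions rather than the actual file:

package main

import "os"

// A typical CNI bridge config; the CIDR and plugin details here are
// illustrative, not the exact 457-byte file minikube copies.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}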
	I0109 00:15:19.480953  452237 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:19.481036  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.481070  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=no-preload-378213 minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:19.529444  452237 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:19.828947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:17.214395  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:19.714562  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:18.467657  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.469306  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:20.329278  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:20.829730  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.329756  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.829370  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.329549  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:22.829161  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.329937  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:23.829891  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.329077  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:24.829276  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:21.715433  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.214554  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:22.469602  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:24.968838  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:25.329025  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:25.829279  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.329947  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.829794  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.329030  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:27.829080  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.329613  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:28.829372  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.329826  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:29.829063  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:26.712393  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:28.715010  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:30.329991  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:30.829320  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.329115  452237 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:31.423331  452237 kubeadm.go:1088] duration metric: took 11.942366757s to wait for elevateKubeSystemPrivileges.
	I0109 00:15:31.423377  452237 kubeadm.go:406] StartCluster complete in 5m43.086225729s
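The half-second loop of "kubectl get sa default" above waits for the default ServiceAccount to exist, which signals that the control plane's controllers are up before the minikube-rbac ClusterRoleBinding is relied on; the log reports it resolved after roughly 12 seconds. A minimal client-go sketch of the same wait (illustrative only; the kubeconfig path is the one shown in the log):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms until the "default" ServiceAccount exists or we give up.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}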
	I0109 00:15:31.423405  452237 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.423510  452237 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:15:31.425917  452237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:15:31.426178  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:15:31.426284  452237 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:15:31.426369  452237 addons.go:69] Setting storage-provisioner=true in profile "no-preload-378213"
	I0109 00:15:31.426384  452237 addons.go:69] Setting default-storageclass=true in profile "no-preload-378213"
	I0109 00:15:31.426397  452237 addons.go:237] Setting addon storage-provisioner=true in "no-preload-378213"
	W0109 00:15:31.426409  452237 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:15:31.426432  452237 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-378213"
	I0109 00:15:31.426447  452237 addons.go:69] Setting metrics-server=true in profile "no-preload-378213"
	I0109 00:15:31.426476  452237 addons.go:237] Setting addon metrics-server=true in "no-preload-378213"
	W0109 00:15:31.426484  452237 addons.go:246] addon metrics-server should already be in state true
	I0109 00:15:31.426485  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426540  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.426434  452237 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:15:31.426891  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426905  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.426918  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426927  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.426931  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.446291  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0109 00:15:31.446423  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0109 00:15:31.446819  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0109 00:15:31.447018  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.447639  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.447724  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.447854  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.448095  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448259  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448288  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448354  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.448439  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.448465  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.448921  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.448997  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.449699  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449744  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.449757  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.449785  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.452784  452237 addons.go:237] Setting addon default-storageclass=true in "no-preload-378213"
	W0109 00:15:31.452809  452237 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:15:31.452841  452237 host.go:66] Checking if "no-preload-378213" exists ...
	I0109 00:15:31.454376  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.454416  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.467638  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0109 00:15:31.468325  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.468901  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.468921  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.469339  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.469563  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.471409  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.473329  452237 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:15:31.474680  452237 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.474693  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:15:31.474706  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.473604  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0109 00:15:31.474062  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0109 00:15:31.475095  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475399  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.475612  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.475627  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.475979  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.476163  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.477959  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.479656  452237 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:15:31.478629  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.479280  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.479557  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.480974  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.481058  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:15:31.481066  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:15:31.481079  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.481110  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.481128  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.481308  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.481878  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.482384  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.483085  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.483645  452237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:15:31.483668  452237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:15:31.484708  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485095  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.485117  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.485318  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.487608  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.487807  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.487999  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.499347  452237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0109 00:15:31.499913  452237 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:15:31.500547  452237 main.go:141] libmachine: Using API Version  1
	I0109 00:15:31.500570  452237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:15:31.500917  452237 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:15:31.501145  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetState
	I0109 00:15:31.503016  452237 main.go:141] libmachine: (no-preload-378213) Calling .DriverName
	I0109 00:15:31.503296  452237 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.503310  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:15:31.503325  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHHostname
	I0109 00:15:31.506091  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506397  452237 main.go:141] libmachine: (no-preload-378213) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:ef:49", ip: ""} in network mk-no-preload-378213: {Iface:virbr3 ExpiryTime:2024-01-09 01:09:21 +0000 UTC Type:0 Mac:52:54:00:34:ef:49 Iaid: IPaddr:192.168.61.62 Prefix:24 Hostname:no-preload-378213 Clientid:01:52:54:00:34:ef:49}
	I0109 00:15:31.506455  452237 main.go:141] libmachine: (no-preload-378213) DBG | domain no-preload-378213 has defined IP address 192.168.61.62 and MAC address 52:54:00:34:ef:49 in network mk-no-preload-378213
	I0109 00:15:31.506652  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHPort
	I0109 00:15:31.506831  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHKeyPath
	I0109 00:15:31.506978  452237 main.go:141] libmachine: (no-preload-378213) Calling .GetSSHUsername
	I0109 00:15:31.507091  452237 sshutil.go:53] new ssh client: &{IP:192.168.61.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/no-preload-378213/id_rsa Username:docker}
	I0109 00:15:31.624782  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:15:31.642826  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:15:31.663296  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:15:31.710300  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:15:31.710330  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:15:31.787478  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:15:31.787517  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:15:31.871349  452237 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:31.871407  452237 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:15:31.968192  452237 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:15:32.072474  452237 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-378213" context rescaled to 1 replicas
	I0109 00:15:32.072532  452237 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.62 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:15:32.074625  452237 out.go:177] * Verifying Kubernetes components...
	I0109 00:15:27.468923  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:29.971742  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:32.075944  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:32.439632  452237 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
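The "host record injected into CoreDNS's ConfigMap" line is the result of the sed pipeline started at 00:15:31.624782: it inserts a hosts block mapping host.minikube.internal to 192.168.61.1 immediately before the forward directive (and a log directive after errors) in the Corefile, then replaces the ConfigMap. A minimal Go sketch of that edit on an illustrative, trimmed-down Corefile:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A trimmed-down Corefile of the kind stored in the coredns ConfigMap (illustrative).
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`

	// Block injected so that host.minikube.internal resolves to the host IP,
	// mirroring the sed edit in the log.
	hostsBlock := `        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }`

	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}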
	I0109 00:15:32.439722  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.439751  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440089  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440193  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.440209  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.440219  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.440166  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440559  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.440571  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.440580  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.497313  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.497346  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.497717  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.497747  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901192  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.237846158s)
	I0109 00:15:32.901262  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901276  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901654  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.901703  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:32.901719  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:32.901730  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:32.901662  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902029  452237 main.go:141] libmachine: (no-preload-378213) DBG | Closing plugin on server side
	I0109 00:15:32.902069  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:32.902079  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030220  452237 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.061947007s)
	I0109 00:15:33.030237  452237 node_ready.go:35] waiting up to 6m0s for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.030290  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030308  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.030694  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.030714  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.030725  452237 main.go:141] libmachine: Making call to close driver server
	I0109 00:15:33.030734  452237 main.go:141] libmachine: (no-preload-378213) Calling .Close
	I0109 00:15:33.031003  452237 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:15:33.031022  452237 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:15:33.031034  452237 addons.go:473] Verifying addon metrics-server=true in "no-preload-378213"
	I0109 00:15:33.032849  452237 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:15:33.034106  452237 addons.go:508] enable addons completed in 1.60782305s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:15:33.044548  452237 node_ready.go:49] node "no-preload-378213" has status "Ready":"True"
	I0109 00:15:33.044577  452237 node_ready.go:38] duration metric: took 14.31045ms waiting for node "no-preload-378213" to be "Ready" ...
	I0109 00:15:33.044592  452237 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.060577  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:34.066536  452237 pod_ready.go:97] error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066570  452237 pod_ready.go:81] duration metric: took 1.005962139s waiting for pod "coredns-76f75df574-jm9gw" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:34.066584  452237 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-jm9gw" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-jm9gw" not found
	I0109 00:15:34.066594  452237 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:31.213050  451943 pod_ready.go:102] pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:33.206836  451943 pod_ready.go:81] duration metric: took 4m0.000952779s waiting for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" ...
	E0109 00:15:33.206864  451943 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-8z889" in "kube-system" namespace to be "Ready" (will not retry!)
	I0109 00:15:33.206884  451943 pod_ready.go:38] duration metric: took 4m1.199765303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:15:33.206916  451943 kubeadm.go:640] restartCluster took 5m9.054273444s
	W0109 00:15:33.206995  451943 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0109 00:15:33.207029  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0109 00:15:32.469904  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:34.969702  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:36.074768  452237 pod_ready.go:92] pod "coredns-76f75df574-ztvgr" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.074793  452237 pod_ready.go:81] duration metric: took 2.008191718s waiting for pod "coredns-76f75df574-ztvgr" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.074803  452237 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080586  452237 pod_ready.go:92] pod "etcd-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.080610  452237 pod_ready.go:81] duration metric: took 5.80009ms waiting for pod "etcd-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.080623  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.085972  452237 pod_ready.go:92] pod "kube-apiserver-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.085995  452237 pod_ready.go:81] duration metric: took 5.365045ms waiting for pod "kube-apiserver-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.086004  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091275  452237 pod_ready.go:92] pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.091295  452237 pod_ready.go:81] duration metric: took 5.284302ms waiting for pod "kube-controller-manager-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.091306  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095919  452237 pod_ready.go:92] pod "kube-proxy-4vnf5" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.095938  452237 pod_ready.go:81] duration metric: took 4.624685ms waiting for pod "kube-proxy-4vnf5" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.095949  452237 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471021  452237 pod_ready.go:92] pod "kube-scheduler-no-preload-378213" in "kube-system" namespace has status "Ready":"True"
	I0109 00:15:36.471051  452237 pod_ready.go:81] duration metric: took 375.093915ms waiting for pod "kube-scheduler-no-preload-378213" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:36.471066  452237 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	I0109 00:15:38.478891  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.932714  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.725641704s)
	I0109 00:15:39.932824  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:15:39.949655  451943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:15:39.967317  451943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:15:39.983553  451943 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:15:39.983602  451943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0109 00:15:40.196509  451943 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:15:37.468440  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:39.468561  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:41.468728  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:40.481038  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:42.979928  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:43.468928  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.968791  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:45.479525  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.981785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:49.988192  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:47.970158  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:50.469209  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.798385  451943 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0109 00:15:53.798458  451943 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:15:53.798557  451943 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:15:53.798719  451943 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:15:53.798863  451943 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:15:53.799001  451943 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:15:53.799122  451943 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:15:53.799199  451943 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0109 00:15:53.799296  451943 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:15:53.800918  451943 out.go:204]   - Generating certificates and keys ...
	I0109 00:15:53.801030  451943 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:15:53.801108  451943 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:15:53.801199  451943 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0109 00:15:53.801284  451943 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0109 00:15:53.801342  451943 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0109 00:15:53.801386  451943 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0109 00:15:53.801441  451943 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0109 00:15:53.801491  451943 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0109 00:15:53.801563  451943 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0109 00:15:53.801654  451943 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0109 00:15:53.801710  451943 kubeadm.go:322] [certs] Using the existing "sa" key
	I0109 00:15:53.801776  451943 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:15:53.801841  451943 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:15:53.801885  451943 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:15:53.801935  451943 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:15:53.802013  451943 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:15:53.802097  451943 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:15:53.803572  451943 out.go:204]   - Booting up control plane ...
	I0109 00:15:53.803682  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:15:53.803757  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:15:53.803811  451943 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:15:53.803932  451943 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:15:53.804150  451943 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:15:53.804251  451943 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.506007 seconds
	I0109 00:15:53.804388  451943 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:15:53.804541  451943 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:15:53.804628  451943 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:15:53.804832  451943 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-003293 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0109 00:15:53.804900  451943 kubeadm.go:322] [bootstrap-token] Using token: 4iop3a.ft6ghwlgcg45v9u4
	I0109 00:15:53.806501  451943 out.go:204]   - Configuring RBAC rules ...
	I0109 00:15:53.806592  451943 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:15:53.806724  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:15:53.806832  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:15:53.806959  451943 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:15:53.807033  451943 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:15:53.807071  451943 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:15:53.807109  451943 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:15:53.807115  451943 kubeadm.go:322] 
	I0109 00:15:53.807175  451943 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:15:53.807199  451943 kubeadm.go:322] 
	I0109 00:15:53.807319  451943 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:15:53.807328  451943 kubeadm.go:322] 
	I0109 00:15:53.807353  451943 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:15:53.807457  451943 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:15:53.807531  451943 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:15:53.807541  451943 kubeadm.go:322] 
	I0109 00:15:53.807594  451943 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:15:53.807668  451943 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:15:53.807746  451943 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:15:53.807766  451943 kubeadm.go:322] 
	I0109 00:15:53.807884  451943 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0109 00:15:53.807989  451943 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:15:53.807998  451943 kubeadm.go:322] 
	I0109 00:15:53.808083  451943 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808215  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:15:53.808267  451943 kubeadm.go:322]     --control-plane 	  
	I0109 00:15:53.808282  451943 kubeadm.go:322] 
	I0109 00:15:53.808416  451943 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:15:53.808431  451943 kubeadm.go:322] 
	I0109 00:15:53.808535  451943 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4iop3a.ft6ghwlgcg45v9u4 \
	I0109 00:15:53.808635  451943 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:15:53.808646  451943 cni.go:84] Creating CNI manager for ""
	I0109 00:15:53.808655  451943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:15:53.810445  451943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:15:52.478401  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.478468  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:53.812384  451943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:15:53.822034  451943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0109 00:15:53.841918  451943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:15:53.842007  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.842023  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=old-k8s-version-003293 minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:53.878580  451943 ops.go:34] apiserver oom_adj: -16
	I0109 00:15:54.119184  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:54.619596  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.119468  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:55.619508  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:52.969233  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:54.969384  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.969570  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.978217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:59.478428  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:15:56.119299  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:56.620179  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.119526  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:57.619985  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.119330  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:58.619572  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.120142  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.619498  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.119329  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:00.620206  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:15:59.468767  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.969313  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.978314  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:03.979583  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:01.120279  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:01.619668  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:02.620169  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.120249  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.619563  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.119962  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:04.619912  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.120243  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:05.620114  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:03.971649  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.468683  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:05.980829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:08.479315  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:06.119938  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:06.619543  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.119220  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:07.619392  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.119991  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:08.619517  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.120205  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:09.620121  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.119909  451943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:16:10.273872  451943 kubeadm.go:1088] duration metric: took 16.431936842s to wait for elevateKubeSystemPrivileges.
	I0109 00:16:10.273910  451943 kubeadm.go:406] StartCluster complete in 5m46.185018744s
	I0109 00:16:10.273961  451943 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.274054  451943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:16:10.275851  451943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:16:10.276124  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:16:10.276261  451943 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:16:10.276362  451943 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276373  451943 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276388  451943 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-003293"
	I0109 00:16:10.276394  451943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-003293"
	I0109 00:16:10.276390  451943 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-003293"
	I0109 00:16:10.276415  451943 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-003293"
	W0109 00:16:10.276428  451943 addons.go:246] addon metrics-server should already be in state true
	I0109 00:16:10.276454  451943 config.go:182] Loaded profile config "old-k8s-version-003293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0109 00:16:10.276481  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	W0109 00:16:10.276397  451943 addons.go:246] addon storage-provisioner should already be in state true
	I0109 00:16:10.276544  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.276864  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276880  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276867  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.276941  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.276955  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.277062  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.294099  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0109 00:16:10.294268  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0109 00:16:10.294410  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0109 00:16:10.294718  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294768  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.294925  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.295279  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295305  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295388  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295419  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295397  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.295480  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.295693  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295769  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.295788  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.296012  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.296310  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.296357  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.297119  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.297171  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.299887  451943 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-003293"
	W0109 00:16:10.299910  451943 addons.go:246] addon default-storageclass should already be in state true
	I0109 00:16:10.299946  451943 host.go:66] Checking if "old-k8s-version-003293" exists ...
	I0109 00:16:10.300224  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.300263  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.313007  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0109 00:16:10.313533  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.314010  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.314026  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.314437  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.314622  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.315598  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0109 00:16:10.316247  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.316532  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.318734  451943 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0109 00:16:10.317343  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.317379  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0109 00:16:10.320285  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:16:10.320308  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:16:10.320329  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.320333  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.320705  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.320963  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.321103  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.321233  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.321247  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.321761  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.322210  451943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:16:10.322242  451943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:16:10.323835  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.324029  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.324152  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.324177  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.326057  451943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:16:10.324406  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.328066  451943 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.328087  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:16:10.328096  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.328124  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.328784  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.329014  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.331395  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.331785  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.331810  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.332001  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.332191  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.332335  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.332480  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.347123  451943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0109 00:16:10.347716  451943 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:16:10.348691  451943 main.go:141] libmachine: Using API Version  1
	I0109 00:16:10.348719  451943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:16:10.349127  451943 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:16:10.349342  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetState
	I0109 00:16:10.350834  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .DriverName
	I0109 00:16:10.351133  451943 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.351149  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:16:10.351168  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHHostname
	I0109 00:16:10.354189  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354621  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0e:b5", ip: ""} in network mk-old-k8s-version-003293: {Iface:virbr2 ExpiryTime:2024-01-09 01:10:07 +0000 UTC Type:0 Mac:52:54:00:38:0e:b5 Iaid: IPaddr:192.168.72.81 Prefix:24 Hostname:old-k8s-version-003293 Clientid:01:52:54:00:38:0e:b5}
	I0109 00:16:10.354668  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | domain old-k8s-version-003293 has defined IP address 192.168.72.81 and MAC address 52:54:00:38:0e:b5 in network mk-old-k8s-version-003293
	I0109 00:16:10.354909  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHPort
	I0109 00:16:10.355119  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHKeyPath
	I0109 00:16:10.355294  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .GetSSHUsername
	I0109 00:16:10.355481  451943 sshutil.go:53] new ssh client: &{IP:192.168.72.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/old-k8s-version-003293/id_rsa Username:docker}
	I0109 00:16:10.515777  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:16:10.534034  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:16:10.534064  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0109 00:16:10.554850  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:16:10.584934  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:16:10.584964  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:16:10.615671  451943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:16:10.637303  451943 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.637339  451943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:16:10.680679  451943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:16:10.830403  451943 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-003293" context rescaled to 1 replicas
	I0109 00:16:10.830449  451943 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.81 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:16:10.832633  451943 out.go:177] * Verifying Kubernetes components...
	I0109 00:16:10.834172  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:16:11.515705  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.515738  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516087  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.516123  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516132  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.516141  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.516151  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.516389  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.516407  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.571488  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.571524  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.571880  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.571890  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.571911  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630216  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075317719s)
	I0109 00:16:11.630282  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630297  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.630308  451943 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.014587881s)
	I0109 00:16:11.630345  451943 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0109 00:16:11.630710  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.630729  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.630740  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.630744  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.630751  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.631004  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.631032  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.631153  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.716276  451943 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.716463  451943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0357366s)
	I0109 00:16:11.716513  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716534  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.716848  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.716869  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.716878  451943 main.go:141] libmachine: Making call to close driver server
	I0109 00:16:11.716889  451943 main.go:141] libmachine: (old-k8s-version-003293) Calling .Close
	I0109 00:16:11.717212  451943 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:16:11.717222  451943 main.go:141] libmachine: (old-k8s-version-003293) DBG | Closing plugin on server side
	I0109 00:16:11.717228  451943 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:16:11.717245  451943 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-003293"
	I0109 00:16:11.719193  451943 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0109 00:16:08.968622  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.470234  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:10.479812  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:12.984384  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:11.720570  451943 addons.go:508] enable addons completed in 1.44432074s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0109 00:16:11.733736  451943 node_ready.go:49] node "old-k8s-version-003293" has status "Ready":"True"
	I0109 00:16:11.733767  451943 node_ready.go:38] duration metric: took 17.451191ms waiting for node "old-k8s-version-003293" to be "Ready" ...
	I0109 00:16:11.733787  451943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:11.750301  451943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:13.762510  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:13.969774  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.468912  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:15.481249  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:17.978744  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:19.979938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:16.257523  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.259142  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.757454  451943 pod_ready.go:102] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:18.469229  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:20.469761  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:22.478368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.978345  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:21.256765  451943 pod_ready.go:92] pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.256797  451943 pod_ready.go:81] duration metric: took 9.506455286s waiting for pod "coredns-5644d7b6d9-8pkqq" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.256807  451943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262633  451943 pod_ready.go:92] pod "kube-proxy-h8br2" in "kube-system" namespace has status "Ready":"True"
	I0109 00:16:21.262651  451943 pod_ready.go:81] duration metric: took 5.836717ms waiting for pod "kube-proxy-h8br2" in "kube-system" namespace to be "Ready" ...
	I0109 00:16:21.262660  451943 pod_ready.go:38] duration metric: took 9.52886361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:16:21.262697  451943 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:16:21.262758  451943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:16:21.280249  451943 api_server.go:72] duration metric: took 10.449767566s to wait for apiserver process to appear ...
	I0109 00:16:21.280282  451943 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:16:21.280305  451943 api_server.go:253] Checking apiserver healthz at https://192.168.72.81:8443/healthz ...
	I0109 00:16:21.286759  451943 api_server.go:279] https://192.168.72.81:8443/healthz returned 200:
	ok
	I0109 00:16:21.287885  451943 api_server.go:141] control plane version: v1.16.0
	I0109 00:16:21.287913  451943 api_server.go:131] duration metric: took 7.622726ms to wait for apiserver health ...
	I0109 00:16:21.287924  451943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:16:21.292745  451943 system_pods.go:59] 4 kube-system pods found
	I0109 00:16:21.292774  451943 system_pods.go:61] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.292782  451943 system_pods.go:61] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.292792  451943 system_pods.go:61] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.292799  451943 system_pods.go:61] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.292809  451943 system_pods.go:74] duration metric: took 4.87707ms to wait for pod list to return data ...
	I0109 00:16:21.292817  451943 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:16:21.295463  451943 default_sa.go:45] found service account: "default"
	I0109 00:16:21.295486  451943 default_sa.go:55] duration metric: took 2.661749ms for default service account to be created ...
	I0109 00:16:21.295495  451943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:16:21.299334  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.299369  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.299379  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.299389  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.299401  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.299419  451943 retry.go:31] will retry after 262.555966ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.567416  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.567444  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.567449  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.567456  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.567461  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.567483  451943 retry.go:31] will retry after 296.862413ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:21.869873  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:21.869910  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:21.869919  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:21.869932  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:21.869939  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:21.869960  451943 retry.go:31] will retry after 354.537219ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.229945  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.229973  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.229978  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.229985  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.229990  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.230008  451943 retry.go:31] will retry after 403.317754ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.639068  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:22.639100  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:22.639106  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:22.639115  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:22.639122  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:22.639145  451943 retry.go:31] will retry after 548.96975ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:23.193832  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:23.193865  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:23.193874  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:23.193884  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:23.193891  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:23.193912  451943 retry.go:31] will retry after 808.39734ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:24.007761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:24.007789  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:24.007794  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:24.007800  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:24.007805  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:24.007826  451943 retry.go:31] will retry after 1.084893616s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:25.097415  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:25.097446  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:25.097452  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:25.097461  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:25.097468  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:25.097488  451943 retry.go:31] will retry after 1.364718688s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:22.471347  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:24.968309  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.968540  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.981321  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:28.981763  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:26.469277  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:26.469302  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:26.469308  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:26.469314  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:26.469319  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:26.469336  451943 retry.go:31] will retry after 1.608197445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.083522  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:28.083549  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:28.083554  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:28.083561  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:28.083566  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:28.083584  451943 retry.go:31] will retry after 1.803084046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:29.892783  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:29.892825  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:29.892834  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:29.892845  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:29.892852  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:29.892878  451943 retry.go:31] will retry after 2.500544298s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:28.970772  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:30.972069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:31.478822  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:33.481537  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:32.406761  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:32.406791  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:32.406796  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:32.406803  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:32.406808  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:32.406826  451943 retry.go:31] will retry after 3.245901502s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:35.657591  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:35.657630  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:35.657636  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:35.657644  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:35.657650  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:35.657669  451943 retry.go:31] will retry after 2.987638992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:33.468927  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.968669  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:35.979914  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:37.982358  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:38.652562  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:38.652589  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:38.652594  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:38.652600  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:38.652605  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:38.652621  451943 retry.go:31] will retry after 5.12035072s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:38.469167  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.469783  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:40.481402  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:42.980559  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:43.778329  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:43.778358  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:43.778363  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:43.778370  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:43.778375  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:43.778392  451943 retry.go:31] will retry after 5.3812896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:42.972242  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.468157  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:45.479217  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:47.978368  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.978994  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.165092  451943 system_pods.go:86] 4 kube-system pods found
	I0109 00:16:49.165124  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:49.165129  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:49.165136  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:49.165142  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:49.165161  451943 retry.go:31] will retry after 8.788078847s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:47.469586  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:49.968667  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.969102  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:51.979785  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:53.984069  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:54.467285  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.469141  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:56.478629  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:58.479207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:16:57.958448  451943 system_pods.go:86] 5 kube-system pods found
	I0109 00:16:57.958475  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:16:57.958481  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Pending
	I0109 00:16:57.958485  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:16:57.958492  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:16:57.958497  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:16:57.958515  451943 retry.go:31] will retry after 8.563711001s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0109 00:16:58.470664  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.970608  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:00.481608  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:02.978829  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:03.468919  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.469064  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:05.482545  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:07.979446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:06.528938  451943 system_pods.go:86] 6 kube-system pods found
	I0109 00:17:06.528963  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:06.528969  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:06.528973  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:06.528977  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:06.528987  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:06.528994  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:06.529016  451943 retry.go:31] will retry after 11.544909303s: missing components: etcd, kube-apiserver
	I0109 00:17:07.969131  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:09.969180  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:10.479061  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.480724  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.978853  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:12.468823  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:14.469027  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:16.968659  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.081528  451943 system_pods.go:86] 8 kube-system pods found
	I0109 00:17:18.081568  451943 system_pods.go:89] "coredns-5644d7b6d9-8pkqq" [17a9c02c-1016-4886-8f49-d1e14b9cb915] Running
	I0109 00:17:18.081576  451943 system_pods.go:89] "etcd-old-k8s-version-003293" [f4516e0b-a960-4dc1-85c3-ae8197ded761] Running
	I0109 00:17:18.081583  451943 system_pods.go:89] "kube-apiserver-old-k8s-version-003293" [c5e83fe4-e95d-47ec-86a4-0615095ef746] Running
	I0109 00:17:18.081590  451943 system_pods.go:89] "kube-controller-manager-old-k8s-version-003293" [7cc16294-f8aa-4a93-b7c8-7abe1b911aea] Running
	I0109 00:17:18.081596  451943 system_pods.go:89] "kube-proxy-h8br2" [69fde48c-e316-4625-8317-93cf921c2380] Running
	I0109 00:17:18.081603  451943 system_pods.go:89] "kube-scheduler-old-k8s-version-003293" [67f0bbb4-b3f5-47ce-b1a2-3e3eab88484b] Running
	I0109 00:17:18.081613  451943 system_pods.go:89] "metrics-server-74d5856cc6-xdjs4" [88b6acd7-0f5c-4358-a202-1d3a6b045b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:17:18.081622  451943 system_pods.go:89] "storage-provisioner" [8a6f9137-5492-4115-9eed-f533c9af1016] Running
	I0109 00:17:18.081636  451943 system_pods.go:126] duration metric: took 56.786133323s to wait for k8s-apps to be running ...
	I0109 00:17:18.081651  451943 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:17:18.081726  451943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:17:18.103798  451943 system_svc.go:56] duration metric: took 22.127635ms WaitForService to wait for kubelet.
	I0109 00:17:18.103844  451943 kubeadm.go:581] duration metric: took 1m7.273361806s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:17:18.103879  451943 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:17:18.107740  451943 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:17:18.107768  451943 node_conditions.go:123] node cpu capacity is 2
	I0109 00:17:18.107803  451943 node_conditions.go:105] duration metric: took 3.918349ms to run NodePressure ...
	I0109 00:17:18.107814  451943 start.go:228] waiting for startup goroutines ...
	I0109 00:17:18.107826  451943 start.go:233] waiting for cluster config update ...
	I0109 00:17:18.107838  451943 start.go:242] writing updated cluster config ...
	I0109 00:17:18.108179  451943 ssh_runner.go:195] Run: rm -f paused
	I0109 00:17:18.161701  451943 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0109 00:17:18.163722  451943 out.go:177] 
	W0109 00:17:18.165269  451943 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0109 00:17:18.166781  451943 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0109 00:17:18.168422  451943 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-003293" cluster and "default" namespace by default
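	(The retry.go/system_pods.go lines above record the wait pattern that gates this "Done!": list the kube-system namespace, note which of etcd, kube-apiserver, kube-controller-manager and kube-scheduler are still missing, and retry after a growing delay. Below is a minimal client-go sketch of that pattern, not minikube's own code; waitForSystemPods, the kubeconfig path, and the plain 1.5x backoff are illustrative assumptions, the logged retries suggest jittered growth.)

	    // Sketch only: poll kube-system until the required control-plane pods are Running.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"strings"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func waitForSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	    	required := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	    	backoff := 5 * time.Second
	    	for {
	    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	    		if err != nil {
	    			return err
	    		}
	    		var missing []string
	    		for _, name := range required {
	    			found := false
	    			for _, p := range pods.Items {
	    				// Control-plane static pods are named <component>-<node>.
	    				if strings.HasPrefix(p.Name, name) && p.Status.Phase == corev1.PodRunning {
	    					found = true
	    					break
	    				}
	    			}
	    			if !found {
	    				missing = append(missing, name)
	    			}
	    		}
	    		if len(missing) == 0 {
	    			return nil
	    		}
	    		fmt.Printf("will retry after %s: missing components: %s\n", backoff, strings.Join(missing, ", "))
	    		time.Sleep(backoff)
	    		backoff += backoff / 2 // grow the delay between attempts
	    	}
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := waitForSystemPods(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
	    		panic(err)
	    	}
	    }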
	I0109 00:17:16.980679  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:19.480507  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:18.969475  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.471739  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:21.978721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:24.478734  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:23.968125  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:25.968375  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:26.483938  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:28.979405  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:27.969238  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:29.969349  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.973290  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:31.479085  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:33.978966  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:34.469294  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.967991  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:36.478328  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.481642  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:38.970055  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:41.468509  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:40.978336  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:42.979499  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:44.980394  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:43.471069  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:45.969083  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:47.479177  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:49.483109  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:48.469215  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:50.970448  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:51.979138  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:54.479275  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:53.469152  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:55.470554  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:56.480333  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:58.980818  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:57.968358  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:17:59.968498  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:01.485721  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:03.980131  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:02.468272  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:04.469640  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:06.970010  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:05.981218  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:08.478827  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:09.469651  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:11.970360  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:10.979972  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:12.980174  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:14.470845  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:16.969297  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:15.479585  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:17.979035  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.979874  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:19.471447  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:21.473866  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:22.479239  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:24.979662  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:23.969077  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:26.469232  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:27.480054  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:29.978803  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:28.470397  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:30.968399  451984 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:31.979175  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:33.982180  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:32.467688  451984 pod_ready.go:81] duration metric: took 4m0.007315063s waiting for pod "metrics-server-57f55c9bc5-zg66s" in "kube-system" namespace to be "Ready" ...
	E0109 00:18:32.467715  451984 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:18:32.467724  451984 pod_ready.go:38] duration metric: took 4m2.010477321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:18:32.467740  451984 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:18:32.467770  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:32.467841  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:32.540539  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.540568  451984 cri.go:89] found id: ""
	I0109 00:18:32.540578  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:32.540633  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.547617  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:32.547712  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:32.593446  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:32.593548  451984 cri.go:89] found id: ""
	I0109 00:18:32.593566  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:32.593622  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.598538  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:32.598630  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:32.641182  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.641217  451984 cri.go:89] found id: ""
	I0109 00:18:32.641227  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:32.641281  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.645529  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:32.645610  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:32.687187  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:32.687222  451984 cri.go:89] found id: ""
	I0109 00:18:32.687233  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:32.687299  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.691477  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:32.691551  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:32.730800  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:32.730834  451984 cri.go:89] found id: ""
	I0109 00:18:32.730853  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:32.730914  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.735372  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:32.735458  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:32.779326  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:32.779355  451984 cri.go:89] found id: ""
	I0109 00:18:32.779384  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:32.779528  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.784366  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:32.784444  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:32.825533  451984 cri.go:89] found id: ""
	I0109 00:18:32.825566  451984 logs.go:284] 0 containers: []
	W0109 00:18:32.825577  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:32.825586  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:32.825657  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:32.871429  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:32.871465  451984 cri.go:89] found id: ""
	I0109 00:18:32.871478  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:32.871546  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:32.876454  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:32.876483  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:32.931470  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:32.931518  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:32.976305  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:32.976344  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:33.421205  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:33.421256  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:33.436706  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:33.436752  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:33.605332  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:33.605369  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:33.653704  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:33.653746  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:33.697440  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:33.697489  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:33.753681  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:33.753728  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:33.798230  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:33.798271  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:33.862054  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:33.862089  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:33.942360  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:33.942549  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:33.965458  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:33.965503  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:34.012430  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012465  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:34.012554  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:34.012575  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:34.012583  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:34.012590  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:34.012596  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:36.480501  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:38.979625  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:41.480903  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:43.978879  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:44.014441  451984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:18:44.031831  451984 api_server.go:72] duration metric: took 4m15.676282348s to wait for apiserver process to appear ...
	I0109 00:18:44.031865  451984 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:18:44.031906  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:44.031966  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:44.077138  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.077163  451984 cri.go:89] found id: ""
	I0109 00:18:44.077172  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:44.077232  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.081831  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:44.081906  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:44.121451  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.121474  451984 cri.go:89] found id: ""
	I0109 00:18:44.121482  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:44.121535  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.126070  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:44.126158  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:44.170657  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:44.170690  451984 cri.go:89] found id: ""
	I0109 00:18:44.170699  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:44.170753  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.175896  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:44.175977  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:44.220851  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.220877  451984 cri.go:89] found id: ""
	I0109 00:18:44.220886  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:44.220937  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.225006  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:44.225094  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:44.270073  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.270107  451984 cri.go:89] found id: ""
	I0109 00:18:44.270118  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:44.270188  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.275153  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:44.275245  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:44.318077  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:44.318111  451984 cri.go:89] found id: ""
	I0109 00:18:44.318122  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:44.318201  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.322475  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:44.322560  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:44.361736  451984 cri.go:89] found id: ""
	I0109 00:18:44.361773  451984 logs.go:284] 0 containers: []
	W0109 00:18:44.361784  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:44.361792  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:44.361864  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:44.404699  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:44.404726  451984 cri.go:89] found id: ""
	I0109 00:18:44.404737  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:44.404803  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:44.408753  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:44.408777  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:44.455119  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:44.455162  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:44.497680  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:44.497721  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:44.548809  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:44.548841  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:44.628959  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:44.629159  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:44.651315  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:44.651388  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:44.666013  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:44.666055  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:44.716269  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:44.716317  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:44.762681  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:44.762720  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:45.136682  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:45.136743  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:45.274971  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:45.275023  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:45.323164  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:45.323208  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:45.383823  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:45.383881  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:45.428483  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428516  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:45.428571  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:45.428579  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:45.428588  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:45.428601  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:45.428608  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:45.980484  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:48.483446  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:50.980210  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:53.480495  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:55.429277  451984 api_server.go:253] Checking apiserver healthz at https://192.168.50.132:8443/healthz ...
	I0109 00:18:55.436812  451984 api_server.go:279] https://192.168.50.132:8443/healthz returned 200:
	ok
	I0109 00:18:55.438287  451984 api_server.go:141] control plane version: v1.28.4
	I0109 00:18:55.438316  451984 api_server.go:131] duration metric: took 11.40644287s to wait for apiserver health ...
	I0109 00:18:55.438327  451984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:18:55.438359  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:18:55.438433  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:18:55.485627  451984 cri.go:89] found id: "a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:55.485654  451984 cri.go:89] found id: ""
	I0109 00:18:55.485664  451984 logs.go:284] 1 containers: [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9]
	I0109 00:18:55.485732  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.490219  451984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:18:55.490296  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:18:55.531890  451984 cri.go:89] found id: "004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:55.531920  451984 cri.go:89] found id: ""
	I0109 00:18:55.531930  451984 logs.go:284] 1 containers: [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773]
	I0109 00:18:55.532002  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.536651  451984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:18:55.536724  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:18:55.579859  451984 cri.go:89] found id: "deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:55.579909  451984 cri.go:89] found id: ""
	I0109 00:18:55.579921  451984 logs.go:284] 1 containers: [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757]
	I0109 00:18:55.579981  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.584894  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:18:55.584970  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:18:55.626833  451984 cri.go:89] found id: "e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:55.626861  451984 cri.go:89] found id: ""
	I0109 00:18:55.626871  451984 logs.go:284] 1 containers: [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb]
	I0109 00:18:55.626940  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.631334  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:18:55.631449  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:18:55.675805  451984 cri.go:89] found id: "6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:55.675831  451984 cri.go:89] found id: ""
	I0109 00:18:55.675843  451984 logs.go:284] 1 containers: [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247]
	I0109 00:18:55.675907  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.680727  451984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:18:55.680805  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:18:55.734757  451984 cri.go:89] found id: "3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:55.734788  451984 cri.go:89] found id: ""
	I0109 00:18:55.734799  451984 logs.go:284] 1 containers: [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2]
	I0109 00:18:55.734867  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.739390  451984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:18:55.739464  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:18:55.785683  451984 cri.go:89] found id: ""
	I0109 00:18:55.785720  451984 logs.go:284] 0 containers: []
	W0109 00:18:55.785733  451984 logs.go:286] No container was found matching "kindnet"
	I0109 00:18:55.785741  451984 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:18:55.785815  451984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:18:55.839983  451984 cri.go:89] found id: "cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:55.840010  451984 cri.go:89] found id: ""
	I0109 00:18:55.840018  451984 logs.go:284] 1 containers: [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c]
	I0109 00:18:55.840066  451984 ssh_runner.go:195] Run: which crictl
	I0109 00:18:55.844870  451984 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:18:55.844897  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:18:55.979554  451984 logs.go:123] Gathering logs for coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] ...
	I0109 00:18:55.979600  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757"
	I0109 00:18:56.023796  451984 logs.go:123] Gathering logs for kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] ...
	I0109 00:18:56.023840  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb"
	I0109 00:18:56.070463  451984 logs.go:123] Gathering logs for kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] ...
	I0109 00:18:56.070512  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247"
	I0109 00:18:56.116109  451984 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:18:56.116142  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:18:56.505693  451984 logs.go:123] Gathering logs for container status ...
	I0109 00:18:56.505742  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:18:56.566638  451984 logs.go:123] Gathering logs for kubelet ...
	I0109 00:18:56.566683  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:18:56.649199  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.649372  451984 logs.go:138] Found kubelet problem: Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.670766  451984 logs.go:123] Gathering logs for kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] ...
	I0109 00:18:56.670809  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9"
	I0109 00:18:56.719532  451984 logs.go:123] Gathering logs for etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] ...
	I0109 00:18:56.719574  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773"
	I0109 00:18:56.763714  451984 logs.go:123] Gathering logs for kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] ...
	I0109 00:18:56.763758  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2"
	I0109 00:18:56.825271  451984 logs.go:123] Gathering logs for storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] ...
	I0109 00:18:56.825324  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c"
	I0109 00:18:56.869669  451984 logs.go:123] Gathering logs for dmesg ...
	I0109 00:18:56.869717  451984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:18:56.890240  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890274  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:18:56.890355  451984 out.go:239] X Problems detected in kubelet:
	W0109 00:18:56.890385  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: W0109 00:14:27.737298    3798 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	W0109 00:18:56.890395  451984 out.go:239]   Jan 09 00:14:27 embed-certs-845373 kubelet[3798]: E0109 00:14:27.737344    3798 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-845373" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-845373' and this object
	I0109 00:18:56.890406  451984 out.go:309] Setting ErrFile to fd 2...
	I0109 00:18:56.890415  451984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:18:55.481178  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:57.979207  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:18:59.980319  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:02.478816  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:04.478919  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:06.899277  451984 system_pods.go:59] 8 kube-system pods found
	I0109 00:19:06.899321  451984 system_pods.go:61] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.899329  451984 system_pods.go:61] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.899334  451984 system_pods.go:61] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.899338  451984 system_pods.go:61] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.899348  451984 system_pods.go:61] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.899352  451984 system_pods.go:61] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.899383  451984 system_pods.go:61] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.899395  451984 system_pods.go:61] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.899414  451984 system_pods.go:74] duration metric: took 11.461075857s to wait for pod list to return data ...
	I0109 00:19:06.899429  451984 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:19:06.903404  451984 default_sa.go:45] found service account: "default"
	I0109 00:19:06.903436  451984 default_sa.go:55] duration metric: took 3.995992ms for default service account to be created ...
	I0109 00:19:06.903448  451984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:19:06.910497  451984 system_pods.go:86] 8 kube-system pods found
	I0109 00:19:06.910523  451984 system_pods.go:89] "coredns-5dd5756b68-j5mzp" [79554198-e2ef-48e1-b6e3-fc3ea068778e] Running
	I0109 00:19:06.910528  451984 system_pods.go:89] "etcd-embed-certs-845373" [dddf22d1-9f04-470f-9228-b4de90e5d496] Running
	I0109 00:19:06.910533  451984 system_pods.go:89] "kube-apiserver-embed-certs-845373" [d91721f5-3162-4cfa-b930-e2875d732a43] Running
	I0109 00:19:06.910537  451984 system_pods.go:89] "kube-controller-manager-embed-certs-845373" [b9f9aa25-0641-44cc-b53d-67cacbc57166] Running
	I0109 00:19:06.910541  451984 system_pods.go:89] "kube-proxy-nxtn2" [4bb69868-6675-4dc0-80c1-b3b2dc0ba6df] Running
	I0109 00:19:06.910545  451984 system_pods.go:89] "kube-scheduler-embed-certs-845373" [820a2cef-802c-4ad9-adb4-dd03a28c4852] Running
	I0109 00:19:06.910553  451984 system_pods.go:89] "metrics-server-57f55c9bc5-zg66s" [0052e55b-f5ad-4aea-9568-9a5f99033dc3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:19:06.910558  451984 system_pods.go:89] "storage-provisioner" [19e4933d-98fd-4607-bc51-e8e2ff8b65bb] Running
	I0109 00:19:06.910564  451984 system_pods.go:126] duration metric: took 7.110675ms to wait for k8s-apps to be running ...
	I0109 00:19:06.910571  451984 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:19:06.910616  451984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:19:06.927621  451984 system_svc.go:56] duration metric: took 17.036468ms WaitForService to wait for kubelet.
	I0109 00:19:06.927654  451984 kubeadm.go:581] duration metric: took 4m38.572113328s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:19:06.927677  451984 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:19:06.931040  451984 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:19:06.931071  451984 node_conditions.go:123] node cpu capacity is 2
	I0109 00:19:06.931083  451984 node_conditions.go:105] duration metric: took 3.401351ms to run NodePressure ...
	I0109 00:19:06.931095  451984 start.go:228] waiting for startup goroutines ...
	I0109 00:19:06.931101  451984 start.go:233] waiting for cluster config update ...
	I0109 00:19:06.931113  451984 start.go:242] writing updated cluster config ...
	I0109 00:19:06.931454  451984 ssh_runner.go:195] Run: rm -f paused
	I0109 00:19:06.989366  451984 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:19:06.991673  451984 out.go:177] * Done! kubectl is now configured to use "embed-certs-845373" cluster and "default" namespace by default
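	(The api_server.go:253/279 lines above record the health gate behind this "Done!": GET /healthz on the cluster endpoint until it answers 200 "ok". A minimal sketch of such a probe follows, not minikube's implementation; the endpoint is the one from the log, and skipping CA verification is purely an assumption for brevity, a real caller would load the cluster CA.)

	    // Sketch only: poll the apiserver /healthz endpoint until it returns 200 "ok".
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	url := "https://192.168.50.132:8443/healthz"
	    	for {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Printf("%s returned 200: %s\n", url, body)
	    				return
	    			}
	    		}
	    		time.Sleep(2 * time.Second) // keep retrying until the control plane is reachable
	    	}
	    }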
	I0109 00:19:06.479508  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:08.978313  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:11.482400  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:13.979056  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:16.480908  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:18.481024  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:20.482252  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:22.978703  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:24.979574  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:26.979620  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:29.478426  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:31.478540  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:33.478901  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:35.978875  452237 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace has status "Ready":"False"
	I0109 00:19:36.471149  452237 pod_ready.go:81] duration metric: took 4m0.000060952s waiting for pod "metrics-server-57f55c9bc5-k426v" in "kube-system" namespace to be "Ready" ...
	E0109 00:19:36.471203  452237 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0109 00:19:36.471221  452237 pod_ready.go:38] duration metric: took 4m3.426617855s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
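	The repeated "Ready":"False" entries above are pod_ready.go polling the Ready condition of metrics-server-57f55c9bc5-k426v until the 4m0s deadline expires with "context deadline exceeded". The same condition can be read directly with kubectl (illustrative; the pod name is from the log above, and the context name no-preload-378213 appears later in this same run's output):

	    kubectl --context no-preload-378213 -n kube-system get pod metrics-server-57f55c9bc5-k426v \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the metrics-server container is not ready, matching the entries logged above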
	I0109 00:19:36.471243  452237 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:19:36.471314  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:36.471400  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:36.539330  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:36.539370  452237 cri.go:89] found id: ""
	I0109 00:19:36.539383  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:36.539446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.544259  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:36.544339  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:36.591395  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:36.591437  452237 cri.go:89] found id: ""
	I0109 00:19:36.591448  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:36.591520  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.596454  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:36.596523  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:36.641041  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:36.641070  452237 cri.go:89] found id: ""
	I0109 00:19:36.641082  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:36.641145  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.645716  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:36.645798  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:36.686577  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:36.686607  452237 cri.go:89] found id: ""
	I0109 00:19:36.686618  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:36.686686  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.690744  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:36.690824  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:36.733504  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:36.733534  452237 cri.go:89] found id: ""
	I0109 00:19:36.733544  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:36.733613  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.738581  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:36.738663  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:36.783280  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:36.783314  452237 cri.go:89] found id: ""
	I0109 00:19:36.783326  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:36.783419  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.788101  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:36.788171  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:36.839094  452237 cri.go:89] found id: ""
	I0109 00:19:36.839124  452237 logs.go:284] 0 containers: []
	W0109 00:19:36.839133  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:36.839139  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:36.839201  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:36.880203  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:36.880236  452237 cri.go:89] found id: ""
	I0109 00:19:36.880247  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:36.880329  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:36.884703  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:36.884732  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:36.900132  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:36.900175  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:37.044558  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:37.044596  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:37.090555  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:37.090601  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:37.550107  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:37.550164  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:37.608267  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:37.608316  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:37.689186  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:37.689447  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:37.712896  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:37.712958  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:37.766035  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:37.766078  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:37.814072  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:37.814111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:37.858686  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:37.858725  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:37.912616  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:37.912661  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:37.973080  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:37.973129  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:38.016941  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.016989  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:38.017072  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:38.017088  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:38.017101  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:38.017118  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:38.017128  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
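	Each gathering cycle above ends by flagging the same two kubelet reflector errors as "Problems detected in kubelet". They can be pulled out of the node's journal directly with something like the following (illustrative; it mirrors the journalctl invocation the kubelet gathering step above already runs, with the grep done on the host side):

	    minikube -p no-preload-378213 ssh "sudo journalctl -u kubelet -n 400" | grep -E 'forbidden|no relationship found'
	    # surfaces the coredns ConfigMap list/watch RBAC errors from 00:15:32 quoted above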
	I0109 00:19:48.018753  452237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:19:48.040302  452237 api_server.go:72] duration metric: took 4m15.967717255s to wait for apiserver process to appear ...
	I0109 00:19:48.040335  452237 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:19:48.040382  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:48.040539  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:48.105058  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.105084  452237 cri.go:89] found id: ""
	I0109 00:19:48.105095  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:48.105158  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.110067  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:48.110165  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:48.153350  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.153383  452237 cri.go:89] found id: ""
	I0109 00:19:48.153394  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:48.153464  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.158284  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:48.158355  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:48.205447  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.205480  452237 cri.go:89] found id: ""
	I0109 00:19:48.205492  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:48.205572  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.210254  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:48.210353  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:48.253594  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:48.253624  452237 cri.go:89] found id: ""
	I0109 00:19:48.253633  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:48.253700  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.259160  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:48.259229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:48.302358  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.302383  452237 cri.go:89] found id: ""
	I0109 00:19:48.302393  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:48.302446  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.308134  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:48.308229  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:48.349632  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:48.349656  452237 cri.go:89] found id: ""
	I0109 00:19:48.349664  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:48.349715  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.354626  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:48.354693  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:48.400501  452237 cri.go:89] found id: ""
	I0109 00:19:48.400535  452237 logs.go:284] 0 containers: []
	W0109 00:19:48.400547  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:48.400555  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:48.400626  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:48.444607  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:48.444631  452237 cri.go:89] found id: ""
	I0109 00:19:48.444641  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:48.444710  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:48.448965  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:19:48.449000  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:19:48.496050  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:19:48.496085  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:19:48.620778  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:19:48.620812  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:48.688155  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:19:48.688204  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:48.745755  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:19:48.745792  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:48.786141  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:19:48.786195  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:48.833422  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:19:48.833456  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:19:49.231467  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:49.231508  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:49.315139  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.315313  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.337901  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:19:49.337942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:19:49.353452  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:19:49.353494  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:49.409069  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:19:49.409111  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:49.466267  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:19:49.466311  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:49.512720  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512762  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:19:49.512838  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:19:49.512858  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:49.512868  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:19:49.512882  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:19:49.512891  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:19:59.513828  452237 api_server.go:253] Checking apiserver healthz at https://192.168.61.62:8443/healthz ...
	I0109 00:19:59.518896  452237 api_server.go:279] https://192.168.61.62:8443/healthz returned 200:
	ok
	I0109 00:19:59.520439  452237 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:19:59.520463  452237 api_server.go:131] duration metric: took 11.480122148s to wait for apiserver health ...
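	The healthz wait above is a plain HTTPS GET against the endpoint logged at 00:19:59.513. An equivalent manual probe (illustrative; the address is taken from the log, -k skips verification of the cluster's self-signed apiserver certificate, and /healthz is typically readable anonymously under the default RBAC rules):

	    curl -k https://192.168.61.62:8443/healthz
	    # expected output: ok   (matching the 200 response logged above)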
	I0109 00:19:59.520479  452237 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:19:59.520504  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:19:59.520549  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:19:59.566636  452237 cri.go:89] found id: "31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:19:59.566669  452237 cri.go:89] found id: ""
	I0109 00:19:59.566680  452237 logs.go:284] 1 containers: [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b]
	I0109 00:19:59.566773  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.570754  452237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:19:59.570817  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:19:59.612286  452237 cri.go:89] found id: "3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:19:59.612314  452237 cri.go:89] found id: ""
	I0109 00:19:59.612326  452237 logs.go:284] 1 containers: [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd]
	I0109 00:19:59.612399  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.618705  452237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:19:59.618778  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:19:59.666381  452237 cri.go:89] found id: "16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:19:59.666408  452237 cri.go:89] found id: ""
	I0109 00:19:59.666417  452237 logs.go:284] 1 containers: [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8]
	I0109 00:19:59.666468  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.672155  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:19:59.672242  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:19:59.712973  452237 cri.go:89] found id: "6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:19:59.712997  452237 cri.go:89] found id: ""
	I0109 00:19:59.713005  452237 logs.go:284] 1 containers: [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a]
	I0109 00:19:59.713068  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.717181  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:19:59.717261  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:19:59.762121  452237 cri.go:89] found id: "577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:19:59.762153  452237 cri.go:89] found id: ""
	I0109 00:19:59.762163  452237 logs.go:284] 1 containers: [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b]
	I0109 00:19:59.762236  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.766573  452237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:19:59.766630  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:19:59.812202  452237 cri.go:89] found id: "315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:19:59.812233  452237 cri.go:89] found id: ""
	I0109 00:19:59.812246  452237 logs.go:284] 1 containers: [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24]
	I0109 00:19:59.812309  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.817529  452237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:19:59.817615  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:19:59.865373  452237 cri.go:89] found id: ""
	I0109 00:19:59.865402  452237 logs.go:284] 0 containers: []
	W0109 00:19:59.865410  452237 logs.go:286] No container was found matching "kindnet"
	I0109 00:19:59.865417  452237 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0109 00:19:59.865486  452237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0109 00:19:59.914250  452237 cri.go:89] found id: "9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:19:59.914273  452237 cri.go:89] found id: ""
	I0109 00:19:59.914283  452237 logs.go:284] 1 containers: [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62]
	I0109 00:19:59.914369  452237 ssh_runner.go:195] Run: which crictl
	I0109 00:19:59.918360  452237 logs.go:123] Gathering logs for kubelet ...
	I0109 00:19:59.918391  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:19:59.999676  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:19:59.999875  452237 logs.go:138] Found kubelet problem: Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.022457  452237 logs.go:123] Gathering logs for kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] ...
	I0109 00:20:00.022496  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a"
	I0109 00:20:00.082902  452237 logs.go:123] Gathering logs for kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] ...
	I0109 00:20:00.082942  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b"
	I0109 00:20:00.127886  452237 logs.go:123] Gathering logs for storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] ...
	I0109 00:20:00.127933  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62"
	I0109 00:20:00.168705  452237 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:20:00.168737  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:20:00.554704  452237 logs.go:123] Gathering logs for container status ...
	I0109 00:20:00.554751  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:20:00.604427  452237 logs.go:123] Gathering logs for dmesg ...
	I0109 00:20:00.604462  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:20:00.618923  452237 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:20:00.618954  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:20:00.747443  452237 logs.go:123] Gathering logs for kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] ...
	I0109 00:20:00.747475  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b"
	I0109 00:20:00.802652  452237 logs.go:123] Gathering logs for etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] ...
	I0109 00:20:00.802691  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd"
	I0109 00:20:00.849279  452237 logs.go:123] Gathering logs for coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] ...
	I0109 00:20:00.849318  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8"
	I0109 00:20:00.887879  452237 logs.go:123] Gathering logs for kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] ...
	I0109 00:20:00.887919  452237 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24"
	I0109 00:20:00.951894  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.951928  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:20:00.951999  452237 out.go:239] X Problems detected in kubelet:
	W0109 00:20:00.952011  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: W0109 00:15:32.352656    4312 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	W0109 00:20:00.952019  452237 out.go:239]   Jan 09 00:15:32 no-preload-378213 kubelet[4312]: E0109 00:15:32.352698    4312 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-378213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-378213' and this object
	I0109 00:20:00.952030  452237 out.go:309] Setting ErrFile to fd 2...
	I0109 00:20:00.952035  452237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:20:10.962675  452237 system_pods.go:59] 8 kube-system pods found
	I0109 00:20:10.962706  452237 system_pods.go:61] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.962711  452237 system_pods.go:61] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.962716  452237 system_pods.go:61] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.962721  452237 system_pods.go:61] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.962725  452237 system_pods.go:61] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.962729  452237 system_pods.go:61] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.962735  452237 system_pods.go:61] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.962740  452237 system_pods.go:61] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.962747  452237 system_pods.go:74] duration metric: took 11.442261888s to wait for pod list to return data ...
	I0109 00:20:10.962755  452237 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:20:10.965782  452237 default_sa.go:45] found service account: "default"
	I0109 00:20:10.965808  452237 default_sa.go:55] duration metric: took 3.046646ms for default service account to be created ...
	I0109 00:20:10.965817  452237 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:20:10.972286  452237 system_pods.go:86] 8 kube-system pods found
	I0109 00:20:10.972323  452237 system_pods.go:89] "coredns-76f75df574-ztvgr" [9dca02e6-8b8c-491f-a689-fb9b51c5f88e] Running
	I0109 00:20:10.972331  452237 system_pods.go:89] "etcd-no-preload-378213" [f10240c3-24a8-4973-8567-078f76cb7258] Running
	I0109 00:20:10.972340  452237 system_pods.go:89] "kube-apiserver-no-preload-378213" [508be6e9-3556-48ef-a5a4-6ed6dae76375] Running
	I0109 00:20:10.972349  452237 system_pods.go:89] "kube-controller-manager-no-preload-378213" [8ff18e72-1b74-4586-ab09-f1dada5d3d75] Running
	I0109 00:20:10.972356  452237 system_pods.go:89] "kube-proxy-4vnf5" [1a87e8a6-55b5-4579-aa4e-1a20be126ba2] Running
	I0109 00:20:10.972366  452237 system_pods.go:89] "kube-scheduler-no-preload-378213" [c232bbac-828a-4c9a-858b-38ed25270dbc] Running
	I0109 00:20:10.972381  452237 system_pods.go:89] "metrics-server-57f55c9bc5-k426v" [ccc02dbd-f70f-46d3-b39d-0fef97bfa04e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0109 00:20:10.972392  452237 system_pods.go:89] "storage-provisioner" [95fe5038-977e-430a-8bda-42557c536114] Running
	I0109 00:20:10.972406  452237 system_pods.go:126] duration metric: took 6.583119ms to wait for k8s-apps to be running ...
	I0109 00:20:10.972427  452237 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:20:10.972490  452237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:20:10.992310  452237 system_svc.go:56] duration metric: took 19.873367ms WaitForService to wait for kubelet.
	I0109 00:20:10.992340  452237 kubeadm.go:581] duration metric: took 4m38.919766965s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:20:10.992363  452237 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:20:10.996337  452237 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:20:10.996373  452237 node_conditions.go:123] node cpu capacity is 2
	I0109 00:20:10.996390  452237 node_conditions.go:105] duration metric: took 4.019869ms to run NodePressure ...
	I0109 00:20:10.996405  452237 start.go:228] waiting for startup goroutines ...
	I0109 00:20:10.996414  452237 start.go:233] waiting for cluster config update ...
	I0109 00:20:10.996429  452237 start.go:242] writing updated cluster config ...
	I0109 00:20:10.996742  452237 ssh_runner.go:195] Run: rm -f paused
	I0109 00:20:11.052916  452237 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0109 00:20:11.055339  452237 out.go:177] * Done! kubectl is now configured to use "no-preload-378213" cluster and "default" namespace by default
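	Both profiles above finish their wait loops here; what follows is the raw CRI-O journal collected from the old-k8s-version-003293 node. The same excerpt can be reproduced on a live profile with the commands the gathering steps above run over SSH (illustrative):

	    minikube -p old-k8s-version-003293 ssh "sudo journalctl -u crio -n 400"
	    minikube -p old-k8s-version-003293 ssh "sudo crictl ps -a"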
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:10:06 UTC, ends at Tue 2024-01-09 00:29:08 UTC. --
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.748964601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0946b927-2c07-4c04-9802-2cf18f4df38f name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.751354340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=35ad8e6e-b9b6-4d69-a6ea-00baadd920f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.751874261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760148751762454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=35ad8e6e-b9b6-4d69-a6ea-00baadd920f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.752668507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1803628e-e4f5-4a69-81d6-a502be854f58 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.752715738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1803628e-e4f5-4a69-81d6-a502be854f58 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.752978705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1803628e-e4f5-4a69-81d6-a502be854f58 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.766087079Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=a998e9c7-48d3-4168-8556-80aa304ffb9c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.766368107Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8d90b326de0eab92db1347033a0717c9bdd5d6679e330808e80314a65fc5a4fe,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-xdjs4,Uid:88b6acd7-0f5c-4358-a202-1d3a6b045b77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759372984291485,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-xdjs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b6acd7-0f5c-4358-a202-1d3a6b045b77,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:16:12.627274643Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-8pkqq,Uid:17a9c02c-1016-4886-8f49-d1e14
b9cb915,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759372323270180,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:16:11.069381326Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8a6f9137-5492-4115-9eed-f533c9af1016,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759371999527937,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f
533c9af1016,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-09T00:16:11.652346701Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&PodSandboxMetadata{Name:kube-proxy-h8br2,Uid:69fde48c-e316-4625-831
7-93cf921c2380,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759370141096288,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde48c-e316-4625-8317-93cf921c2380,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:16:09.789090911Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-003293,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759342721219986,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca
1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-09T00:15:42.18938153Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-003293,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759342708304446,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-09T00:15:42.189379914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id
:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-003293,Uid:100910ea2692f1e03d189e20d9f20750,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759342654502456,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 100910ea2692f1e03d189e20d9f20750,kubernetes.io/config.seen: 2024-01-09T00:15:42.189371534Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-003293,Uid:9065e7a4794c902f87c467d8e60abdab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759036223005163,Labels:map[string]string{component: kube-apiserver,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9065e7a4794c902f87c467d8e60abdab,kubernetes.io/config.seen: 2024-01-09T00:10:35.740736191Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a998e9c7-48d3-4168-8556-80aa304ffb9c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.767398241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2701654c-84ac-4c33-8fd4-34c015ba4d50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.767463354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2701654c-84ac-4c33-8fd4-34c015ba4d50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.767750701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2701654c-84ac-4c33-8fd4-34c015ba4d50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.802117452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e663f1e5-5220-44d8-aa06-464c3cc3d74d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.802217635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e663f1e5-5220-44d8-aa06-464c3cc3d74d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.809131999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e7ef6fef-1848-488e-b626-26404f984567 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.809685588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760148809670487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=e7ef6fef-1848-488e-b626-26404f984567 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.810932449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cde94d57-bb66-4a0a-ba15-d69614af70bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.811083722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cde94d57-bb66-4a0a-ba15-d69614af70bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.811421220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cde94d57-bb66-4a0a-ba15-d69614af70bc name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.853495935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5fa2b1e3-a273-4917-8fd1-114818993425 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.853577960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5fa2b1e3-a273-4917-8fd1-114818993425 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.855116782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a254ec0a-a639-4fdd-9d7a-d79d2a4ffaf1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.855588826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760148855563417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=a254ec0a-a639-4fdd-9d7a-d79d2a4ffaf1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.856456472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b97ea4f7-ad0a-4580-92a4-ea00cedd4641 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.856530282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b97ea4f7-ad0a-4580-92a4-ea00cedd4641 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:08 old-k8s-version-003293 crio[729]: time="2024-01-09 00:29:08.856752263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a,PodSandboxId:37b29a7d3bfe3c575f4d784fd64868a9ee27ab39df476f24e7ca0ed81631389c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759372943626753,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a6f9137-5492-4115-9eed-f533c9af1016,},Annotations:map[string]string{io.kubernetes.container.hash: 48601650,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a,PodSandboxId:54d7cb7dd30a2c6661db5f94f623f188f812a61202ee74ab8fab2cd267630dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704759372555929477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-8pkqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17a9c02c-1016-4886-8f49-d1e14b9cb915,},Annotations:map[string]string{io.kubernetes.container.hash: 558e6395,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d,PodSandboxId:fdfcaed558b5f2d5bf12b0c68e1ee40e7303bf3fe0feba7efcddec18e6077240,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704759371790629672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8br2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69fde
48c-e316-4625-8317-93cf921c2380,},Annotations:map[string]string{io.kubernetes.container.hash: 3a857b28,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232,PodSandboxId:0f4694eb54e11a5528310e144126ae94ec595aa5046b5bdb1a6c28d1267e98ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704759344482421795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 100910ea2692f1e03d189e20d9f20750,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 7d132bee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47,PodSandboxId:089b0c01eba48cd4f79070a8020abc52da2ab5535fc43f8ee5632571a6898ff1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704759343651576236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4,PodSandboxId:6b4c05a9ceacd459239840ce7352c4f50c1be443c07b2736cfb420b25c31420e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704759343215883113,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704759342648132721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682,PodSandboxId:33a3e2bd44491d093f26bb3e606d25c94bfacad4074320d66e155e67c0e5df2a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1704759036867542500,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-003293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9065e7a4794c902f87c467d8e60abdab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 842e48fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b97ea4f7-ad0a-4580-92a4-ea00cedd4641 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e25cd2c892d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   37b29a7d3bfe3       storage-provisioner
	17dc6ef75c618       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   54d7cb7dd30a2       coredns-5644d7b6d9-8pkqq
	901108dc95db4       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   fdfcaed558b5f       kube-proxy-h8br2
	9435012e8152c       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   0f4694eb54e11       etcd-old-k8s-version-003293
	5374a9cceed08       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   089b0c01eba48       kube-scheduler-old-k8s-version-003293
	ef679dd71c7bb       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   6b4c05a9ceacd       kube-controller-manager-old-k8s-version-003293
	bfc228c8d35e5       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            1                   33a3e2bd44491       kube-apiserver-old-k8s-version-003293
	545c3df0e504b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   18 minutes ago      Exited              kube-apiserver            0                   33a3e2bd44491       kube-apiserver-old-k8s-version-003293
	
	
	==> coredns [17dc6ef75c6185e93a5f6746e779d9f9301702306ba729889486fe54705cf08a] <==
	.:53
	2024-01-09T00:16:12.824Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2024-01-09T00:16:12.824Z [INFO] CoreDNS-1.6.2
	2024-01-09T00:16:12.824Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-09T00:16:13.858Z [INFO] 127.0.0.1:33319 - 41703 "HINFO IN 8110508765458628312.3799816984617018093. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034938749s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-003293
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-003293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=old-k8s-version-003293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_15_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:28:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:28:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:28:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:28:49 +0000   Tue, 09 Jan 2024 00:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.81
	  Hostname:    old-k8s-version-003293
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 49f6c1a6bb44454db83294b9ea7b39ff
	 System UUID:                49f6c1a6-bb44-454d-b832-94b9ea7b39ff
	 Boot ID:                    f192b0ec-7f75-483f-b3ee-d655d1b3cb77
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-8pkqq                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-003293                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-003293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-003293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-h8br2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-003293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-xdjs4                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet, old-k8s-version-003293     Node old-k8s-version-003293 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-003293  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077324] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan 9 00:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.539993] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145362] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.539596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000010] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.943318] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.130020] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.178463] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.123133] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.240760] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +20.081351] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.450304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 9 00:11] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 9 00:15] systemd-fstab-generator[3100]: Ignoring "noauto" for root device
	[  +1.777811] kauditd_printk_skb: 8 callbacks suppressed
	[Jan 9 00:16] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9435012e8152c313ca88aa72ec4b33f989906d1c677b1fa09c86107bcc166232] <==
	2024-01-09 00:15:44.761581 I | raft: c388cf4f1b00fa7 became follower at term 0
	2024-01-09 00:15:44.761602 I | raft: newRaft c388cf4f1b00fa7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-09 00:15:44.761617 I | raft: c388cf4f1b00fa7 became follower at term 1
	2024-01-09 00:15:44.772274 W | auth: simple token is not cryptographically signed
	2024-01-09 00:15:44.778613 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-09 00:15:44.781956 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-09 00:15:44.782186 I | embed: listening for metrics on http://192.168.72.81:2381
	2024-01-09 00:15:44.782498 I | etcdserver: c388cf4f1b00fa7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-09 00:15:44.783225 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-09 00:15:44.783611 I | etcdserver/membership: added member c388cf4f1b00fa7 [https://192.168.72.81:2380] to cluster eabbb2578081711e
	2024-01-09 00:15:45.562126 I | raft: c388cf4f1b00fa7 is starting a new election at term 1
	2024-01-09 00:15:45.562158 I | raft: c388cf4f1b00fa7 became candidate at term 2
	2024-01-09 00:15:45.562168 I | raft: c388cf4f1b00fa7 received MsgVoteResp from c388cf4f1b00fa7 at term 2
	2024-01-09 00:15:45.562177 I | raft: c388cf4f1b00fa7 became leader at term 2
	2024-01-09 00:15:45.562182 I | raft: raft.node: c388cf4f1b00fa7 elected leader c388cf4f1b00fa7 at term 2
	2024-01-09 00:15:45.563103 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-09 00:15:45.563168 I | embed: ready to serve client requests
	2024-01-09 00:15:45.563331 I | etcdserver: published {Name:old-k8s-version-003293 ClientURLs:[https://192.168.72.81:2379]} to cluster eabbb2578081711e
	2024-01-09 00:15:45.563975 I | embed: ready to serve client requests
	2024-01-09 00:15:45.565136 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-09 00:15:45.566994 I | embed: serving client requests on 192.168.72.81:2379
	2024-01-09 00:15:45.567975 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-09 00:15:45.568044 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-09 00:25:45.582106 I | mvcc: store.index: compact 668
	2024-01-09 00:25:45.586069 I | mvcc: finished scheduled compaction at 668 (took 3.306474ms)
	
	
	==> kernel <==
	 00:29:09 up 19 min,  0 users,  load average: 0.24, 0.16, 0.16
	Linux old-k8s-version-003293 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [545c3df0e504b3d82b0d13be5bc90a1556f75a529bdcb61ae78cb14ac8b49682] <==
	W0109 00:15:38.631617       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.632209       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.632484       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633343       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633398       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.633598       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.634940       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635140       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635281       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635322       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635762       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635830       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.635942       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636499       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636593       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.636883       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637023       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637198       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.637284       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638173       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638231       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638254       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:38.638286       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:39.912156       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0109 00:15:39.920146       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [bfc228c8d35e5f0632b6b852b0b8218dda44875200c49a378442a5151cee6b63] <==
	I0109 00:21:49.945391       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:21:49.945534       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:21:49.945602       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:21:49.945620       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:23:49.946034       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:23:49.946433       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:23:49.946529       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:23:49.946564       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:25:49.948451       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:25:49.948575       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:25:49.948666       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:25:49.948674       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:26:49.949173       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:26:49.949287       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:26:49.949379       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:26:49.949389       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:28:49.949676       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0109 00:28:49.949896       1 handler_proxy.go:99] no RequestInfo found in the context
	E0109 00:28:49.949984       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:28:49.949996       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ef679dd71c7bb3d60fe3ea767e6e7029f591df0cfc33d84dcd3c583c877a42e4] <==
	E0109 00:22:43.273996       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:23:06.072770       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:23:13.526717       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:23:38.075267       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:23:43.779405       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:24:10.077745       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:24:14.031355       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:24:42.080103       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:24:44.283281       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:25:14.082358       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:25:14.535234       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0109 00:25:44.786988       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:25:46.084446       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:26:15.039013       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:26:18.086913       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:26:45.291769       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:26:50.088680       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:27:15.544455       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:27:22.090597       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:27:45.796546       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:27:54.092441       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:28:16.049299       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:28:26.094546       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0109 00:28:46.301646       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0109 00:28:58.096941       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [901108dc95db495ed7dd22c21e81ee5f51cdbeec8eb7c414b27e5117dc99c67d] <==
	W0109 00:16:12.407459       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0109 00:16:12.418543       1 node.go:135] Successfully retrieved node IP: 192.168.72.81
	I0109 00:16:12.418619       1 server_others.go:149] Using iptables Proxier.
	I0109 00:16:12.419006       1 server.go:529] Version: v1.16.0
	I0109 00:16:12.425199       1 config.go:313] Starting service config controller
	I0109 00:16:12.425290       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0109 00:16:12.425338       1 config.go:131] Starting endpoints config controller
	I0109 00:16:12.425360       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0109 00:16:12.527119       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0109 00:16:12.527134       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [5374a9cceed08bada4a14d906a1f4f49a10ef201a2b41cd3d6c21c0bd0749f47] <==
	I0109 00:15:48.945833       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0109 00:15:48.975469       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:48.985138       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:49.004582       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:49.010167       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:15:49.010339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:49.010395       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:15:49.010433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:49.010465       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:49.011181       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:49.011384       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:49.011690       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:15:49.977868       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:49.988200       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:50.006132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:50.011967       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:15:50.016013       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:50.017933       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:15:50.020171       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:50.022377       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:50.024051       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:50.025343       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:50.026221       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:16:09.509171       1 factory.go:585] pod is already present in the activeQ
	E0109 00:16:09.534150       1 factory.go:585] pod is already present in the activeQ
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:10:06 UTC, ends at Tue 2024-01-09 00:29:09 UTC. --
	Jan 09 00:24:38 old-k8s-version-003293 kubelet[3118]: E0109 00:24:38.196046    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:24:51 old-k8s-version-003293 kubelet[3118]: E0109 00:24:51.196412    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:04 old-k8s-version-003293 kubelet[3118]: E0109 00:25:04.195920    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:16 old-k8s-version-003293 kubelet[3118]: E0109 00:25:16.195689    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:28 old-k8s-version-003293 kubelet[3118]: E0109 00:25:28.195423    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:39 old-k8s-version-003293 kubelet[3118]: E0109 00:25:39.195967    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:25:42 old-k8s-version-003293 kubelet[3118]: E0109 00:25:42.284179    3118 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 09 00:25:53 old-k8s-version-003293 kubelet[3118]: E0109 00:25:53.195628    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:05 old-k8s-version-003293 kubelet[3118]: E0109 00:26:05.195874    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:19 old-k8s-version-003293 kubelet[3118]: E0109 00:26:19.196756    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:30 old-k8s-version-003293 kubelet[3118]: E0109 00:26:30.196395    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:41 old-k8s-version-003293 kubelet[3118]: E0109 00:26:41.197990    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:26:54 old-k8s-version-003293 kubelet[3118]: E0109 00:26:54.195656    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:27:06 old-k8s-version-003293 kubelet[3118]: E0109 00:27:06.200108    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:27:20 old-k8s-version-003293 kubelet[3118]: E0109 00:27:20.248197    3118 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:27:20 old-k8s-version-003293 kubelet[3118]: E0109 00:27:20.248292    3118 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:27:20 old-k8s-version-003293 kubelet[3118]: E0109 00:27:20.248362    3118 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 09 00:27:20 old-k8s-version-003293 kubelet[3118]: E0109 00:27:20.248397    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 09 00:27:35 old-k8s-version-003293 kubelet[3118]: E0109 00:27:35.196548    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:27:50 old-k8s-version-003293 kubelet[3118]: E0109 00:27:50.195883    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:28:04 old-k8s-version-003293 kubelet[3118]: E0109 00:28:04.195930    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:28:16 old-k8s-version-003293 kubelet[3118]: E0109 00:28:16.195846    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:28:31 old-k8s-version-003293 kubelet[3118]: E0109 00:28:31.196023    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:28:46 old-k8s-version-003293 kubelet[3118]: E0109 00:28:46.195538    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 09 00:28:57 old-k8s-version-003293 kubelet[3118]: E0109 00:28:57.195924    3118 pod_workers.go:191] Error syncing pod 88b6acd7-0f5c-4358-a202-1d3a6b045b77 ("metrics-server-74d5856cc6-xdjs4_kube-system(88b6acd7-0f5c-4358-a202-1d3a6b045b77)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [9e25cd2c892d164617a5d58dbcbe63511fa19646051eeacc2b6d6f0227eaf52a] <==
	I0109 00:16:13.104836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:16:13.123900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:16:13.124176       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:16:13.137182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:16:13.138271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4fa71a0-98bf-489e-a78f-c5ca48fc8f89", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098 became leader
	I0109 00:16:13.142528       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098!
	I0109 00:16:13.243130       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-003293_d0403520-5de8-4cde-b25f-e79b49df3098!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003293 -n old-k8s-version-003293
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-003293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-xdjs4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4: exit status 1 (69.381134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-xdjs4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-003293 describe pod metrics-server-74d5856cc6-xdjs4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (168.72s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (129.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:28:36.012655  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:28:52.661794  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845373 -n embed-certs-845373
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:30:17.977731158 +0000 UTC m=+5916.854681367
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-845373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-845373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.86µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-845373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-845373 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-845373 logs -n 25: (1.418579456s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| start   | -p newest-cni-745275 --memory=2200 --alsologtostderr   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| addons  | enable metrics-server -p newest-cni-745275             | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:29:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:29:11.688732  457766 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:29:11.688871  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.688878  457766 out.go:309] Setting ErrFile to fd 2...
	I0109 00:29:11.688885  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.689175  457766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:29:11.689819  457766 out.go:303] Setting JSON to false
	I0109 00:29:11.690900  457766 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18678,"bootTime":1704741474,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:29:11.690970  457766 start.go:138] virtualization: kvm guest
	I0109 00:29:11.693663  457766 out.go:177] * [newest-cni-745275] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:29:11.695220  457766 notify.go:220] Checking for updates...
	I0109 00:29:11.696511  457766 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:29:11.697854  457766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:29:11.699113  457766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:29:11.700713  457766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:11.702142  457766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:29:11.703441  457766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:29:11.705450  457766 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705599  457766 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705733  457766 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:29:11.705960  457766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:29:11.753468  457766 out.go:177] * Using the kvm2 driver based on user configuration
	I0109 00:29:11.755037  457766 start.go:298] selected driver: kvm2
	I0109 00:29:11.755057  457766 start.go:902] validating driver "kvm2" against <nil>
	I0109 00:29:11.755077  457766 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:29:11.755971  457766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.756044  457766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:29:11.773794  457766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:29:11.773854  457766 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0109 00:29:11.773927  457766 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0109 00:29:11.776728  457766 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0109 00:29:11.776819  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:29:11.776837  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:29:11.776868  457766 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0109 00:29:11.776889  457766 start_flags.go:323] config:
	{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:29:11.777170  457766 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.779020  457766 out.go:177] * Starting control plane node newest-cni-745275 in cluster newest-cni-745275
	I0109 00:29:11.780422  457766 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:29:11.780468  457766 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0109 00:29:11.780477  457766 cache.go:56] Caching tarball of preloaded images
	I0109 00:29:11.780546  457766 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:29:11.780557  457766 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0109 00:29:11.780646  457766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:29:11.780691  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json: {Name:mk4d641c387ca3ed27cddd141100c40e37d72082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:11.780835  457766 start.go:365] acquiring machines lock for newest-cni-745275: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:29:11.780874  457766 start.go:369] acquired machines lock for "newest-cni-745275" in 24.81µs
	I0109 00:29:11.780899  457766 start.go:93] Provisioning new machine with config: &{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:29:11.780969  457766 start.go:125] createHost starting for "" (driver="kvm2")
	I0109 00:29:11.782998  457766 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0109 00:29:11.783142  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:29:11.783177  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:29:11.801506  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0109 00:29:11.802033  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:29:11.802719  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:29:11.802750  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:29:11.803299  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:29:11.803551  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:11.803725  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:11.803909  457766 start.go:159] libmachine.API.Create for "newest-cni-745275" (driver="kvm2")
	I0109 00:29:11.803941  457766 client.go:168] LocalClient.Create starting
	I0109 00:29:11.804008  457766 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0109 00:29:11.804041  457766 main.go:141] libmachine: Decoding PEM data...
	I0109 00:29:11.804055  457766 main.go:141] libmachine: Parsing certificate...
	I0109 00:29:11.804123  457766 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0109 00:29:11.804144  457766 main.go:141] libmachine: Decoding PEM data...
	I0109 00:29:11.804153  457766 main.go:141] libmachine: Parsing certificate...
	I0109 00:29:11.804168  457766 main.go:141] libmachine: Running pre-create checks...
	I0109 00:29:11.804179  457766 main.go:141] libmachine: (newest-cni-745275) Calling .PreCreateCheck
	I0109 00:29:11.804568  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:11.805090  457766 main.go:141] libmachine: Creating machine...
	I0109 00:29:11.805105  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Create
	I0109 00:29:11.805267  457766 main.go:141] libmachine: (newest-cni-745275) Creating KVM machine...
	I0109 00:29:11.806298  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found existing default KVM network
	I0109 00:29:11.807865  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.807663  457807 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0b:0a:00} reservation:<nil>}
	I0109 00:29:11.808753  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.808667  457807 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:6a:ce} reservation:<nil>}
	I0109 00:29:11.809620  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.809526  457807 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:02:68} reservation:<nil>}
	I0109 00:29:11.810855  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.810788  457807 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000304eb0}
	I0109 00:29:11.816157  457766 main.go:141] libmachine: (newest-cni-745275) DBG | trying to create private KVM network mk-newest-cni-745275 192.168.72.0/24...
	I0109 00:29:11.905107  457766 main.go:141] libmachine: (newest-cni-745275) DBG | private KVM network mk-newest-cni-745275 192.168.72.0/24 created
	I0109 00:29:11.905148  457766 main.go:141] libmachine: (newest-cni-745275) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 ...
	I0109 00:29:11.905161  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.905052  457807 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:11.905175  457766 main.go:141] libmachine: (newest-cni-745275) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0109 00:29:11.905263  457766 main.go:141] libmachine: (newest-cni-745275) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0109 00:29:12.174015  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.173876  457807 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa...
	I0109 00:29:12.447386  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.447209  457807 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/newest-cni-745275.rawdisk...
	I0109 00:29:12.447429  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Writing magic tar header
	I0109 00:29:12.447522  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Writing SSH key tar header
	I0109 00:29:12.447655  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.447569  457807 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 ...
	I0109 00:29:12.447748  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275
	I0109 00:29:12.448081  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 (perms=drwx------)
	I0109 00:29:12.448115  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0109 00:29:12.448130  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0109 00:29:12.448150  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0109 00:29:12.448166  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0109 00:29:12.448178  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:12.448197  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0109 00:29:12.448213  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0109 00:29:12.448227  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0109 00:29:12.448242  457766 main.go:141] libmachine: (newest-cni-745275) Creating domain...
	I0109 00:29:12.448254  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0109 00:29:12.448272  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins
	I0109 00:29:12.448284  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home
	I0109 00:29:12.448300  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Skipping /home - not owner
	I0109 00:29:12.449799  457766 main.go:141] libmachine: (newest-cni-745275) define libvirt domain using xml: 
	I0109 00:29:12.449822  457766 main.go:141] libmachine: (newest-cni-745275) <domain type='kvm'>
	I0109 00:29:12.449859  457766 main.go:141] libmachine: (newest-cni-745275)   <name>newest-cni-745275</name>
	I0109 00:29:12.449886  457766 main.go:141] libmachine: (newest-cni-745275)   <memory unit='MiB'>2200</memory>
	I0109 00:29:12.449895  457766 main.go:141] libmachine: (newest-cni-745275)   <vcpu>2</vcpu>
	I0109 00:29:12.449900  457766 main.go:141] libmachine: (newest-cni-745275)   <features>
	I0109 00:29:12.449907  457766 main.go:141] libmachine: (newest-cni-745275)     <acpi/>
	I0109 00:29:12.449914  457766 main.go:141] libmachine: (newest-cni-745275)     <apic/>
	I0109 00:29:12.449920  457766 main.go:141] libmachine: (newest-cni-745275)     <pae/>
	I0109 00:29:12.449928  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.449934  457766 main.go:141] libmachine: (newest-cni-745275)   </features>
	I0109 00:29:12.449942  457766 main.go:141] libmachine: (newest-cni-745275)   <cpu mode='host-passthrough'>
	I0109 00:29:12.449954  457766 main.go:141] libmachine: (newest-cni-745275)   
	I0109 00:29:12.449970  457766 main.go:141] libmachine: (newest-cni-745275)   </cpu>
	I0109 00:29:12.449983  457766 main.go:141] libmachine: (newest-cni-745275)   <os>
	I0109 00:29:12.449994  457766 main.go:141] libmachine: (newest-cni-745275)     <type>hvm</type>
	I0109 00:29:12.450004  457766 main.go:141] libmachine: (newest-cni-745275)     <boot dev='cdrom'/>
	I0109 00:29:12.450009  457766 main.go:141] libmachine: (newest-cni-745275)     <boot dev='hd'/>
	I0109 00:29:12.450018  457766 main.go:141] libmachine: (newest-cni-745275)     <bootmenu enable='no'/>
	I0109 00:29:12.450023  457766 main.go:141] libmachine: (newest-cni-745275)   </os>
	I0109 00:29:12.450035  457766 main.go:141] libmachine: (newest-cni-745275)   <devices>
	I0109 00:29:12.450050  457766 main.go:141] libmachine: (newest-cni-745275)     <disk type='file' device='cdrom'>
	I0109 00:29:12.450070  457766 main.go:141] libmachine: (newest-cni-745275)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/boot2docker.iso'/>
	I0109 00:29:12.450083  457766 main.go:141] libmachine: (newest-cni-745275)       <target dev='hdc' bus='scsi'/>
	I0109 00:29:12.450106  457766 main.go:141] libmachine: (newest-cni-745275)       <readonly/>
	I0109 00:29:12.450118  457766 main.go:141] libmachine: (newest-cni-745275)     </disk>
	I0109 00:29:12.450155  457766 main.go:141] libmachine: (newest-cni-745275)     <disk type='file' device='disk'>
	I0109 00:29:12.450182  457766 main.go:141] libmachine: (newest-cni-745275)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0109 00:29:12.450213  457766 main.go:141] libmachine: (newest-cni-745275)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/newest-cni-745275.rawdisk'/>
	I0109 00:29:12.450231  457766 main.go:141] libmachine: (newest-cni-745275)       <target dev='hda' bus='virtio'/>
	I0109 00:29:12.450242  457766 main.go:141] libmachine: (newest-cni-745275)     </disk>
	I0109 00:29:12.450252  457766 main.go:141] libmachine: (newest-cni-745275)     <interface type='network'>
	I0109 00:29:12.450264  457766 main.go:141] libmachine: (newest-cni-745275)       <source network='mk-newest-cni-745275'/>
	I0109 00:29:12.450273  457766 main.go:141] libmachine: (newest-cni-745275)       <model type='virtio'/>
	I0109 00:29:12.450290  457766 main.go:141] libmachine: (newest-cni-745275)     </interface>
	I0109 00:29:12.450299  457766 main.go:141] libmachine: (newest-cni-745275)     <interface type='network'>
	I0109 00:29:12.450309  457766 main.go:141] libmachine: (newest-cni-745275)       <source network='default'/>
	I0109 00:29:12.450319  457766 main.go:141] libmachine: (newest-cni-745275)       <model type='virtio'/>
	I0109 00:29:12.450335  457766 main.go:141] libmachine: (newest-cni-745275)     </interface>
	I0109 00:29:12.450346  457766 main.go:141] libmachine: (newest-cni-745275)     <serial type='pty'>
	I0109 00:29:12.450359  457766 main.go:141] libmachine: (newest-cni-745275)       <target port='0'/>
	I0109 00:29:12.450370  457766 main.go:141] libmachine: (newest-cni-745275)     </serial>
	I0109 00:29:12.450383  457766 main.go:141] libmachine: (newest-cni-745275)     <console type='pty'>
	I0109 00:29:12.450393  457766 main.go:141] libmachine: (newest-cni-745275)       <target type='serial' port='0'/>
	I0109 00:29:12.450411  457766 main.go:141] libmachine: (newest-cni-745275)     </console>
	I0109 00:29:12.450420  457766 main.go:141] libmachine: (newest-cni-745275)     <rng model='virtio'>
	I0109 00:29:12.450435  457766 main.go:141] libmachine: (newest-cni-745275)       <backend model='random'>/dev/random</backend>
	I0109 00:29:12.450446  457766 main.go:141] libmachine: (newest-cni-745275)     </rng>
	I0109 00:29:12.450456  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.450465  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.450475  457766 main.go:141] libmachine: (newest-cni-745275)   </devices>
	I0109 00:29:12.450487  457766 main.go:141] libmachine: (newest-cni-745275) </domain>
	I0109 00:29:12.450499  457766 main.go:141] libmachine: (newest-cni-745275) 
	I0109 00:29:12.455338  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:55:63:71 in network default
	I0109 00:29:12.456135  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring networks are active...
	I0109 00:29:12.456162  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:12.456921  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring network default is active
	I0109 00:29:12.457333  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring network mk-newest-cni-745275 is active
	I0109 00:29:12.458065  457766 main.go:141] libmachine: (newest-cni-745275) Getting domain xml...
	I0109 00:29:12.459025  457766 main.go:141] libmachine: (newest-cni-745275) Creating domain...
	I0109 00:29:13.885256  457766 main.go:141] libmachine: (newest-cni-745275) Waiting to get IP...
	I0109 00:29:13.886297  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:13.886750  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:13.886893  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:13.886752  457807 retry.go:31] will retry after 257.298601ms: waiting for machine to come up
	I0109 00:29:14.145529  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.146148  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.146205  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.146086  457807 retry.go:31] will retry after 364.099957ms: waiting for machine to come up
	I0109 00:29:14.511860  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.512383  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.512415  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.512329  457807 retry.go:31] will retry after 457.359198ms: waiting for machine to come up
	I0109 00:29:14.970920  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.971439  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.971527  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.971440  457807 retry.go:31] will retry after 515.451223ms: waiting for machine to come up
	I0109 00:29:15.488173  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:15.488716  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:15.488747  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:15.488663  457807 retry.go:31] will retry after 493.074085ms: waiting for machine to come up
	I0109 00:29:15.983436  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:15.983927  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:15.983960  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:15.983857  457807 retry.go:31] will retry after 916.090818ms: waiting for machine to come up
	I0109 00:29:16.901416  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:16.901879  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:16.901907  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:16.901829  457807 retry.go:31] will retry after 1.157895775s: waiting for machine to come up
	I0109 00:29:18.061691  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:18.062252  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:18.062277  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:18.062198  457807 retry.go:31] will retry after 1.397423702s: waiting for machine to come up
	I0109 00:29:19.461173  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:19.461627  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:19.461651  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:19.461581  457807 retry.go:31] will retry after 1.332950781s: waiting for machine to come up
	I0109 00:29:20.796107  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:20.796540  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:20.796574  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:20.796482  457807 retry.go:31] will retry after 2.241146328s: waiting for machine to come up
	I0109 00:29:23.039833  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:23.040390  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:23.040424  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:23.040328  457807 retry.go:31] will retry after 2.022201691s: waiting for machine to come up
	I0109 00:29:25.064723  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:25.065170  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:25.065201  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:25.065127  457807 retry.go:31] will retry after 3.398624103s: waiting for machine to come up
	I0109 00:29:28.465932  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:28.466445  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:28.466474  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:28.466413  457807 retry.go:31] will retry after 3.878176349s: waiting for machine to come up
	I0109 00:29:32.346143  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:32.346822  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:32.346850  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:32.346770  457807 retry.go:31] will retry after 5.266293301s: waiting for machine to come up
	I0109 00:29:37.614760  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.615259  457766 main.go:141] libmachine: (newest-cni-745275) Found IP for machine: 192.168.72.107
	I0109 00:29:37.615281  457766 main.go:141] libmachine: (newest-cni-745275) Reserving static IP address...
	I0109 00:29:37.615291  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has current primary IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.615715  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find host DHCP lease matching {name: "newest-cni-745275", mac: "52:54:00:41:55:15", ip: "192.168.72.107"} in network mk-newest-cni-745275
	I0109 00:29:37.697767  457766 main.go:141] libmachine: (newest-cni-745275) Reserved static IP address: 192.168.72.107
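	The retry lines above show the driver polling for a DHCP lease with a growing delay until it finds 192.168.72.107. As an illustration only (not minikube's actual retry.go), here is a minimal Go sketch of that kind of backoff poll; the lookup function and the growth/cap values are assumptions made for the example.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or maxWait elapses.
	// Each failed attempt sleeps a randomized, growing delay, mirroring the
	// increasing "will retry after ..." intervals in the log above.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2 // back off, but keep individual waits to a few seconds
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		// Hypothetical lookup: a real caller would read the libvirt network's
		// DHCP leases for the domain's MAC address.
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.72.107", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}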
	I0109 00:29:37.697805  457766 main.go:141] libmachine: (newest-cni-745275) Waiting for SSH to be available...
	I0109 00:29:37.697822  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Getting to WaitForSSH function...
	I0109 00:29:37.700543  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.700933  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275
	I0109 00:29:37.700974  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find defined IP address of network mk-newest-cni-745275 interface with MAC address 52:54:00:41:55:15
	I0109 00:29:37.701130  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH client type: external
	I0109 00:29:37.701158  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa (-rw-------)
	I0109 00:29:37.701202  457766 main.go:141] libmachine: (newest-cni-745275) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:29:37.701235  457766 main.go:141] libmachine: (newest-cni-745275) DBG | About to run SSH command:
	I0109 00:29:37.701260  457766 main.go:141] libmachine: (newest-cni-745275) DBG | exit 0
	I0109 00:29:37.705117  457766 main.go:141] libmachine: (newest-cni-745275) DBG | SSH cmd err, output: exit status 255: 
	I0109 00:29:37.705145  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0109 00:29:37.705157  457766 main.go:141] libmachine: (newest-cni-745275) DBG | command : exit 0
	I0109 00:29:37.705175  457766 main.go:141] libmachine: (newest-cni-745275) DBG | err     : exit status 255
	I0109 00:29:37.705190  457766 main.go:141] libmachine: (newest-cni-745275) DBG | output  : 
	I0109 00:29:40.707273  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Getting to WaitForSSH function...
	I0109 00:29:40.709962  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.710410  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.710444  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.710611  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH client type: external
	I0109 00:29:40.710635  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa (-rw-------)
	I0109 00:29:40.710667  457766 main.go:141] libmachine: (newest-cni-745275) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:29:40.710682  457766 main.go:141] libmachine: (newest-cni-745275) DBG | About to run SSH command:
	I0109 00:29:40.710730  457766 main.go:141] libmachine: (newest-cni-745275) DBG | exit 0
	I0109 00:29:40.807441  457766 main.go:141] libmachine: (newest-cni-745275) DBG | SSH cmd err, output: <nil>: 
	I0109 00:29:40.807710  457766 main.go:141] libmachine: (newest-cni-745275) KVM machine creation complete!
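	The WaitForSSH lines above probe the guest by running `exit 0` through the external ssh client until it stops failing with status 255. A minimal sketch of such a probe, assuming the system ssh binary and a placeholder key path; only the user, port, and options mirror the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `ssh user@host exit 0` succeeds.
	func sshReady(user, host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		return cmd.Run() // a non-zero exit (e.g. status 255) means sshd is not reachable yet
	}

	func main() {
		for i := 0; i < 10; i++ {
			if err := sshReady("docker", "192.168.72.107", "/path/to/id_rsa"); err == nil {
				fmt.Println("ssh is available")
				return
			}
			time.Sleep(3 * time.Second) // the log above waits roughly 3s between probes
		}
		fmt.Println("gave up waiting for ssh")
	}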
	I0109 00:29:40.808079  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:40.808688  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:40.808920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:40.809099  457766 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0109 00:29:40.809117  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:29:40.810518  457766 main.go:141] libmachine: Detecting operating system of created instance...
	I0109 00:29:40.810540  457766 main.go:141] libmachine: Waiting for SSH to be available...
	I0109 00:29:40.810550  457766 main.go:141] libmachine: Getting to WaitForSSH function...
	I0109 00:29:40.810560  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:40.812874  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.813307  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.813336  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.813505  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:40.813684  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.813871  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.814046  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:40.814231  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:40.814616  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:40.814636  457766 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0109 00:29:40.947086  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:29:40.947115  457766 main.go:141] libmachine: Detecting the provisioner...
	I0109 00:29:40.947128  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:40.950358  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.950703  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.950734  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.950920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:40.951166  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.951378  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.951574  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:40.951725  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:40.952096  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:40.952111  457766 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0109 00:29:41.084522  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0109 00:29:41.084650  457766 main.go:141] libmachine: found compatible host: buildroot
	I0109 00:29:41.084661  457766 main.go:141] libmachine: Provisioning with buildroot...
	I0109 00:29:41.084669  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.084970  457766 buildroot.go:166] provisioning hostname "newest-cni-745275"
	I0109 00:29:41.084999  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.085253  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.088254  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.088619  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.088655  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.088827  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.089025  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.089274  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.089398  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.089634  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.090013  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.090033  457766 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-745275 && echo "newest-cni-745275" | sudo tee /etc/hostname
	I0109 00:29:41.236695  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-745275
	
	I0109 00:29:41.236723  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.239668  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.240094  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.240125  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.240267  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.240502  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.240741  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.240920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.241115  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.241494  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.241515  457766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-745275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-745275/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-745275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:29:41.380280  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:29:41.380320  457766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:29:41.380351  457766 buildroot.go:174] setting up certificates
	I0109 00:29:41.380364  457766 provision.go:83] configureAuth start
	I0109 00:29:41.380383  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.380753  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:41.383713  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.384169  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.384199  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.384384  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.386919  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.387253  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.387288  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.387451  457766 provision.go:138] copyHostCerts
	I0109 00:29:41.387522  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:29:41.387535  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:29:41.387616  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:29:41.387729  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:29:41.387741  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:29:41.387776  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:29:41.387905  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:29:41.387919  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:29:41.387946  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:29:41.388025  457766 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.newest-cni-745275 san=[192.168.72.107 192.168.72.107 localhost 127.0.0.1 minikube newest-cni-745275]
	I0109 00:29:41.559865  457766 provision.go:172] copyRemoteCerts
	I0109 00:29:41.559961  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:29:41.560000  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.563118  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.563527  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.563560  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.563751  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.563963  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.564157  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.564319  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:41.662599  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:29:41.687491  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:29:41.712388  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:29:41.735693  457766 provision.go:86] duration metric: configureAuth took 355.307403ms
	I0109 00:29:41.735746  457766 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:29:41.735982  457766 config.go:182] Loaded profile config "newest-cni-745275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:29:41.736141  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.739339  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.739733  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.739782  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.739997  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.740220  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.740424  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.740616  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.740790  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.741147  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.741164  457766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:29:42.087109  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:29:42.087137  457766 main.go:141] libmachine: Checking connection to Docker...
	I0109 00:29:42.087146  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetURL
	I0109 00:29:42.088585  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using libvirt version 6000000
	I0109 00:29:42.091535  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.091932  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.092002  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.092306  457766 main.go:141] libmachine: Docker is up and running!
	I0109 00:29:42.092323  457766 main.go:141] libmachine: Reticulating splines...
	I0109 00:29:42.092330  457766 client.go:171] LocalClient.Create took 30.288379146s
	I0109 00:29:42.092353  457766 start.go:167] duration metric: libmachine.API.Create for "newest-cni-745275" took 30.288444437s
	I0109 00:29:42.092367  457766 start.go:300] post-start starting for "newest-cni-745275" (driver="kvm2")
	I0109 00:29:42.092385  457766 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:29:42.092422  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.092673  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:29:42.092703  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.095192  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.095710  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.095748  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.095999  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.096219  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.096385  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.096612  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.197392  457766 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:29:42.201898  457766 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:29:42.201924  457766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:29:42.202008  457766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:29:42.202099  457766 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:29:42.202191  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:29:42.212292  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:29:42.235838  457766 start.go:303] post-start completed in 143.455436ms
	I0109 00:29:42.235889  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:42.236504  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:42.239467  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.239895  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.239929  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.240222  457766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:29:42.240442  457766 start.go:128] duration metric: createHost completed in 30.459457123s
	I0109 00:29:42.240510  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.243202  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.243645  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.243674  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.243768  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.243961  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.244120  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.244288  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.244453  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:42.244790  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:42.244803  457766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0109 00:29:42.380213  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760182.368757092
	
	I0109 00:29:42.380244  457766 fix.go:206] guest clock: 1704760182.368757092
	I0109 00:29:42.380255  457766 fix.go:219] Guest: 2024-01-09 00:29:42.368757092 +0000 UTC Remote: 2024-01-09 00:29:42.240492728 +0000 UTC m=+30.609810626 (delta=128.264364ms)
	I0109 00:29:42.380303  457766 fix.go:190] guest clock delta is within tolerance: 128.264364ms
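	The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the ~128ms delta. A small sketch of that comparison using the values from this log; the 2-second tolerance here is an assumption for the example, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := parts[1]
			if len(frac) > 9 {
				frac = frac[:9] // keep nanosecond precision only
			} else {
				frac += strings.Repeat("0", 9-len(frac)) // right-pad so "3" means 300ms, not 3ns
			}
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1704760182.368757092") // value reported by the guest above
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 1, 9, 0, 29, 42, 240492728, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
	}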
	I0109 00:29:42.380315  457766 start.go:83] releasing machines lock for "newest-cni-745275", held for 30.599428284s
	I0109 00:29:42.380348  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.380674  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:42.383692  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.384056  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.384083  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.384304  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.384839  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.385054  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.385152  457766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:29:42.385216  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.385292  457766 ssh_runner.go:195] Run: cat /version.json
	I0109 00:29:42.385322  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.387742  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388077  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.388112  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388133  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388349  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.388531  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.388664  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.388675  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.388686  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388795  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.388861  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.388964  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.389119  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.389265  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.516788  457766 ssh_runner.go:195] Run: systemctl --version
	I0109 00:29:42.522882  457766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:29:42.692632  457766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:29:42.699734  457766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:29:42.699838  457766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:29:42.716543  457766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:29:42.716573  457766 start.go:475] detecting cgroup driver to use...
	I0109 00:29:42.716655  457766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:29:42.730924  457766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:29:42.744175  457766 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:29:42.744247  457766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:29:42.762474  457766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:29:42.777122  457766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:29:42.883698  457766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:29:43.008307  457766 docker.go:219] disabling docker service ...
	I0109 00:29:43.008407  457766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:29:43.022895  457766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:29:43.037037  457766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:29:43.172277  457766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:29:43.296071  457766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:29:43.310145  457766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:29:43.328944  457766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:29:43.329010  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.339234  457766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:29:43.339319  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.349544  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.360020  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.370015  457766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:29:43.381521  457766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:29:43.390544  457766 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:29:43.390612  457766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:29:43.402554  457766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:29:43.411937  457766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:29:43.512803  457766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:29:43.699559  457766 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:29:43.699691  457766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:29:43.705550  457766 start.go:543] Will wait 60s for crictl version
	I0109 00:29:43.705617  457766 ssh_runner.go:195] Run: which crictl
	I0109 00:29:43.709699  457766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:29:43.756776  457766 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:29:43.756890  457766 ssh_runner.go:195] Run: crio --version
	I0109 00:29:43.813309  457766 ssh_runner.go:195] Run: crio --version
	I0109 00:29:43.868764  457766 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:29:43.870210  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:43.873161  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:43.873586  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:43.873627  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:43.873791  457766 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:29:43.878461  457766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:29:43.890679  457766 localpath.go:92] copying /home/jenkins/minikube-integration/17830-399915/.minikube/client.crt -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.crt
	I0109 00:29:43.890881  457766 localpath.go:117] copying /home/jenkins/minikube-integration/17830-399915/.minikube/client.key -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.key
	I0109 00:29:43.892918  457766 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0109 00:29:43.894316  457766 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:29:43.894390  457766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:29:43.930475  457766 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:29:43.930542  457766 ssh_runner.go:195] Run: which lz4
	I0109 00:29:43.935014  457766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:29:43.939648  457766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:29:43.939678  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0109 00:29:45.521781  457766 crio.go:444] Took 1.586795 seconds to copy over tarball
	I0109 00:29:45.521895  457766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:29:48.387678  457766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865742394s)
	I0109 00:29:48.387712  457766 crio.go:451] Took 2.865896 seconds to extract the tarball
	I0109 00:29:48.387725  457766 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:29:48.427863  457766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:29:48.509622  457766 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:29:48.509655  457766 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:29:48.509806  457766 ssh_runner.go:195] Run: crio config
	I0109 00:29:48.569393  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:29:48.569416  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:29:48.569444  457766 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0109 00:29:48.569468  457766 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-745275 NodeName:newest-cni-745275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:29:48.569616  457766 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-745275"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:29:48.569722  457766 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-745275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:29:48.569794  457766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:29:48.579381  457766 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:29:48.579468  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:29:48.588465  457766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0109 00:29:48.606489  457766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
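Writing the 10-kubeadm.conf drop-in and kubelet.service only takes effect once systemd re-reads its unit files. The log does not show that step at this point, but the usual sequence is a daemon-reload followed by a kubelet restart; a sketch of those generic calls from Go (illustrative only, not a claim about what this run executed here):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // reloadKubelet asks systemd to pick up the new unit files and restarts kubelet.
    func reloadKubelet() error {
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "kubelet"},
    	} {
    		cmd := exec.Command("sudo", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("%v: %w", args, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := reloadKubelet(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }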
	I0109 00:29:48.624398  457766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
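The 2233-byte kubeadm.yaml.new written here is the four-document YAML stream rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A purely illustrative sanity check that splits the stream on document separators and reports each document's kind, stdlib only, no real YAML parsing:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Split on "---" separators and pull out the "kind:" line of each document.
    	for i, doc := range strings.Split(string(data), "\n---") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
    			}
    		}
    	}
    }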
	I0109 00:29:48.642459  457766 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0109 00:29:48.646734  457766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:29:48.659922  457766 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275 for IP: 192.168.72.107
	I0109 00:29:48.659967  457766 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.660171  457766 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:29:48.660239  457766 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:29:48.660342  457766 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.key
	I0109 00:29:48.660365  457766 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713
	I0109 00:29:48.660381  457766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 with IP's: [192.168.72.107 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:29:48.784020  457766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 ...
	I0109 00:29:48.784056  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713: {Name:mk8e582bd51932418656f089c541a853f2436e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.784249  457766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713 ...
	I0109 00:29:48.784266  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713: {Name:mk8b17554733fece7685e52b093a0cf81bbabb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.784367  457766 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt
	I0109 00:29:48.784452  457766 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key
	I0109 00:29:48.784532  457766 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key
	I0109 00:29:48.784558  457766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt with IP's: []
	I0109 00:29:48.925964  457766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt ...
	I0109 00:29:48.925996  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt: {Name:mkf40339ad77247d160ed5370260f9070f03d05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.926200  457766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key ...
	I0109 00:29:48.926224  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key: {Name:mk9d33377cda8fb82a9f36198a589923454968a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
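The apiserver certificate generated above carries the IP SANs [192.168.72.107 10.96.0.1 127.0.0.1 10.0.0.1] so the server can be reached by node IP, cluster service IP, and loopback. A condensed crypto/x509 sketch of a certificate with those SANs (self-signed here to stay short; the real one is signed by minikubeCA, and this is not minikube's actual crypto helper):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SANs listed in the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.72.107"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }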
	I0109 00:29:48.926441  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:29:48.926482  457766 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:29:48.926499  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:29:48.926527  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:29:48.926550  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:29:48.926586  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:29:48.926644  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:29:48.927432  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:29:48.956801  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:29:48.981539  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:29:49.005384  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:29:49.030087  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:29:49.054961  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:29:49.077837  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:29:49.102386  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:29:49.125332  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:29:49.149956  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:29:49.175824  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:29:49.199449  457766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:29:49.216134  457766 ssh_runner.go:195] Run: openssl version
	I0109 00:29:49.222359  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:29:49.233099  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.237633  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.237680  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.243173  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:29:49.254739  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:29:49.267126  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.272393  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.272457  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.278065  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:29:49.288339  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:29:49.300031  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.305040  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.305119  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.310769  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
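The /etc/ssl/certs/<hash>.0 names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: `openssl x509 -hash -noout` prints the hash, and the symlink lets OpenSSL-based clients locate the CA by that hash. A small sketch of the same two steps (shelling out to openssl; paths taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a CA file and
    // exposes it under certsDir/<hash>.0, as the commands above do.
    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }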
	I0109 00:29:49.321658  457766 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:29:49.326066  457766 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:29:49.326169  457766 kubeadm.go:404] StartCluster: {Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:29:49.326254  457766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:29:49.326308  457766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:29:49.365875  457766 cri.go:89] found id: ""
	I0109 00:29:49.365987  457766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:29:49.376570  457766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:29:49.386955  457766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:29:49.397047  457766 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:29:49.397109  457766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:29:49.502770  457766 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:29:49.502915  457766 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:29:49.780643  457766 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:29:49.780785  457766 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:29:49.780879  457766 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:29:50.021082  457766 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:29:50.242314  457766 out.go:204]   - Generating certificates and keys ...
	I0109 00:29:50.242428  457766 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:29:50.242543  457766 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:29:50.242633  457766 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:29:50.460477  457766 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:29:50.852988  457766 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:29:51.099764  457766 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:29:51.570378  457766 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:29:51.570641  457766 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-745275] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0109 00:29:51.640432  457766 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:29:51.640644  457766 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-745275] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0109 00:29:51.742035  457766 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:29:52.316853  457766 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:29:52.641755  457766 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:29:52.642077  457766 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:29:52.694782  457766 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:29:53.064310  457766 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:29:53.256345  457766 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:29:53.572594  457766 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:29:54.164580  457766 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:29:54.165506  457766 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:29:54.170678  457766 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:29:54.172664  457766 out.go:204]   - Booting up control plane ...
	I0109 00:29:54.172774  457766 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:29:54.172872  457766 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:29:54.172962  457766 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:29:54.190981  457766 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:29:54.193894  457766 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:29:54.194135  457766 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:29:54.357492  457766 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:30:02.360642  457766 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004470 seconds
	I0109 00:30:02.389072  457766 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:30:02.412486  457766 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:30:02.951288  457766 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:30:02.951549  457766 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-745275 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:30:03.469169  457766 kubeadm.go:322] [bootstrap-token] Using token: 3rjndu.dq7mm2sqtun3hipy
	I0109 00:30:03.470797  457766 out.go:204]   - Configuring RBAC rules ...
	I0109 00:30:03.470916  457766 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:30:03.476165  457766 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:30:03.489180  457766 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:30:03.497888  457766 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:30:03.502360  457766 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:30:03.510594  457766 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:30:03.526260  457766 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:30:03.817092  457766 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:30:03.883716  457766 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:30:03.883771  457766 kubeadm.go:322] 
	I0109 00:30:03.883865  457766 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:30:03.883882  457766 kubeadm.go:322] 
	I0109 00:30:03.883983  457766 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:30:03.883993  457766 kubeadm.go:322] 
	I0109 00:30:03.884023  457766 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:30:03.884093  457766 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:30:03.884180  457766 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:30:03.884203  457766 kubeadm.go:322] 
	I0109 00:30:03.884314  457766 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:30:03.884326  457766 kubeadm.go:322] 
	I0109 00:30:03.884386  457766 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:30:03.884395  457766 kubeadm.go:322] 
	I0109 00:30:03.884454  457766 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:30:03.884538  457766 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:30:03.884618  457766 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:30:03.884640  457766 kubeadm.go:322] 
	I0109 00:30:03.884765  457766 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:30:03.884872  457766 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:30:03.884884  457766 kubeadm.go:322] 
	I0109 00:30:03.884988  457766 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3rjndu.dq7mm2sqtun3hipy \
	I0109 00:30:03.885129  457766 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 \
	I0109 00:30:03.885161  457766 kubeadm.go:322] 	--control-plane 
	I0109 00:30:03.885171  457766 kubeadm.go:322] 
	I0109 00:30:03.885278  457766 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:30:03.885289  457766 kubeadm.go:322] 
	I0109 00:30:03.885391  457766 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3rjndu.dq7mm2sqtun3hipy \
	I0109 00:30:03.885530  457766 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:841a6cb1986c6740acdb208ee441c8236c362397b0832ac835c45c516297a8c2 
	I0109 00:30:03.886167  457766 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:30:03.886192  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:30:03.886203  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:30:03.888121  457766 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0109 00:30:03.889442  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0109 00:30:03.907203  457766 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
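The 457-byte /etc/cni/net.d/1-k8s.conflist written here configures the bridge CNI the log recommends for kvm2 + crio. Its exact contents are not reproduced in this log; the snippet below writes a generic bridge + portmap conflist of roughly that shape (the network name and host-local subnet are placeholders, not the values minikube used):

    package main

    import "os"

    // A generic bridge+portmap conflist; placeholder values, not the real file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }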
	I0109 00:30:03.934499  457766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:30:03.934581  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=newest-cni-745275 minikube.k8s.io/updated_at=2024_01_09T00_30_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:03.934586  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:04.275422  457766 ops.go:34] apiserver oom_adj: -16
	I0109 00:30:04.275656  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:04.776172  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:05.276519  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:05.776525  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:06.276622  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:06.776667  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:07.276530  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:07.776479  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:08.276586  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:08.776670  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:09.276718  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:09.776596  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:10.276631  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:10.775897  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:11.276153  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:11.775848  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:12.276438  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:12.776421  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:13.276430  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:13.775843  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:14.276778  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:14.776631  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:15.276393  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:15.775720  457766 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:30:15.924297  457766 kubeadm.go:1088] duration metric: took 11.989778405s to wait for elevateKubeSystemPrivileges.
	I0109 00:30:15.924345  457766 kubeadm.go:406] StartCluster complete in 26.598190841s
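The ~12 s of repeated `kubectl get sa default` calls above is a simple poll: before binding cluster-admin to kube-system, minikube waits for the default service account to exist. A bare-bones version of that wait, shelling out to kubectl (interval and timeout are illustrative, not the exact values minikube uses):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls until `kubectl get sa default` succeeds or the
    // timeout expires, mirroring the 500 ms retry loop in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }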
	I0109 00:30:15.924373  457766 settings.go:142] acquiring lock: {Name:mkaf19e111206082ea8cee1bf30ad44589520988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:15.924487  457766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:30:15.927048  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/kubeconfig: {Name:mkc1d3e5246bab5ce4f7345deeabe8c464944884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:30:15.927341  457766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:30:15.927383  457766 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:30:15.927466  457766 addons.go:69] Setting default-storageclass=true in profile "newest-cni-745275"
	I0109 00:30:15.927524  457766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-745275"
	I0109 00:30:15.927466  457766 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-745275"
	I0109 00:30:15.927576  457766 config.go:182] Loaded profile config "newest-cni-745275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:30:15.927622  457766 addons.go:237] Setting addon storage-provisioner=true in "newest-cni-745275"
	I0109 00:30:15.927678  457766 host.go:66] Checking if "newest-cni-745275" exists ...
	I0109 00:30:15.928049  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:15.928050  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:15.928085  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:15.928106  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:15.948193  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0109 00:30:15.948687  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:15.949259  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:30:15.949285  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:15.949337  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0109 00:30:15.949684  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:15.949710  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:15.949918  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:30:15.950266  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:30:15.950294  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:15.950780  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:15.951451  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:15.951488  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:15.953631  457766 addons.go:237] Setting addon default-storageclass=true in "newest-cni-745275"
	I0109 00:30:15.953668  457766 host.go:66] Checking if "newest-cni-745275" exists ...
	I0109 00:30:15.954045  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:15.954093  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:15.972222  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0109 00:30:15.972728  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:15.973309  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:30:15.973337  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:15.973686  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0109 00:30:15.973689  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:15.973935  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:30:15.974077  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:15.974622  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:30:15.974645  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:15.974904  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:15.975428  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:30:15.975456  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:30:15.976267  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:15.978178  457766 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:30:15.979168  457766 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:30:15.979183  457766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:30:15.979198  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:15.982441  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:15.982817  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:15.982842  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:15.983101  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:15.983263  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:15.983461  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:15.983576  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:15.996504  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I0109 00:30:15.996937  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:30:15.997521  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:30:15.997536  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:30:15.997900  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:30:15.998093  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:30:15.999789  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:30:16.000090  457766 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:30:16.000103  457766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:30:16.000121  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:30:16.002962  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:16.003343  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:30:16.003382  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:30:16.003667  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:30:16.003863  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:30:16.003993  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:30:16.004129  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:30:16.094402  457766 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
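The sed pipeline above edits the coredns ConfigMap in place: it inserts a `hosts { ... fallthrough }` stanza ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive before `errors`) so that host.minikube.internal resolves inside the cluster. A string-level sketch of just the hosts insertion, with no kubectl involved and a made-up example Corefile:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts stanza before the forward directive,
    // which is what the sed expression in the log accomplishes.
    func injectHostRecord(corefile, ip string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
    	var out strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out.WriteString(stanza)
    		}
    		out.WriteString(line + "\n")
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
    	fmt.Print(injectHostRecord(corefile, "192.168.72.1"))
    }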
	I0109 00:30:16.214291  457766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:30:16.219242  457766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:30:16.551947  457766 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-745275" context rescaled to 1 replicas
	I0109 00:30:16.552003  457766 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:30:16.554079  457766 out.go:177] * Verifying Kubernetes components...
	I0109 00:30:16.555606  457766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:30:16.706256  457766 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0109 00:30:16.986308  457766 main.go:141] libmachine: Making call to close driver server
	I0109 00:30:16.986342  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Close
	I0109 00:30:16.986368  457766 main.go:141] libmachine: Making call to close driver server
	I0109 00:30:16.986390  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Close
	I0109 00:30:16.986704  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Closing plugin on server side
	I0109 00:30:16.986747  457766 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:30:16.986761  457766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:30:16.986772  457766 main.go:141] libmachine: Making call to close driver server
	I0109 00:30:16.986789  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Close
	I0109 00:30:16.986868  457766 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:30:16.986883  457766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:30:16.986894  457766 main.go:141] libmachine: Making call to close driver server
	I0109 00:30:16.986905  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Close
	I0109 00:30:16.987588  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Closing plugin on server side
	I0109 00:30:16.987692  457766 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:30:16.987708  457766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:30:16.988156  457766 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:30:16.988238  457766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:30:16.988655  457766 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:30:16.988685  457766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:30:17.007950  457766 main.go:141] libmachine: Making call to close driver server
	I0109 00:30:17.007973  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Close
	I0109 00:30:17.008360  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Closing plugin on server side
	I0109 00:30:17.008384  457766 main.go:141] libmachine: Successfully made call to close driver server
	I0109 00:30:17.008397  457766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0109 00:30:17.010332  457766 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0109 00:30:17.011966  457766 addons.go:508] enable addons completed in 1.08457973s: enabled=[storage-provisioner default-storageclass]
	I0109 00:30:17.017205  457766 api_server.go:72] duration metric: took 465.163229ms to wait for apiserver process to appear ...
	I0109 00:30:17.017226  457766 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:30:17.017250  457766 api_server.go:253] Checking apiserver healthz at https://192.168.72.107:8443/healthz ...
	I0109 00:30:17.033821  457766 api_server.go:279] https://192.168.72.107:8443/healthz returned 200:
	ok
	I0109 00:30:17.035740  457766 api_server.go:141] control plane version: v1.29.0-rc.2
	I0109 00:30:17.035767  457766 api_server.go:131] duration metric: took 18.533662ms to wait for apiserver health ...
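The healthz wait above is a plain HTTPS GET against https://192.168.72.107:8443/healthz repeated until it returns 200 "ok". A stripped-down equivalent; certificate verification is skipped here purely to keep the sketch short, whereas the real client trusts minikubeCA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 60; i++ {
    		resp, err := client.Get("https://192.168.72.107:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }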
	I0109 00:30:17.035776  457766 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:30:17.064781  457766 system_pods.go:59] 8 kube-system pods found
	I0109 00:30:17.064831  457766 system_pods.go:61] "coredns-76f75df574-rg5xp" [c0db8c4d-1664-443e-bc19-41da4f420aee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:30:17.064852  457766 system_pods.go:61] "coredns-76f75df574-v48wh" [ca9b2ef2-0b6f-4873-9a60-f4bbb8165a32] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0109 00:30:17.064858  457766 system_pods.go:61] "etcd-newest-cni-745275" [4dc6595c-fea2-434f-9db4-3918ad4a32cc] Running
	I0109 00:30:17.064863  457766 system_pods.go:61] "kube-apiserver-newest-cni-745275" [c9765fc3-2bed-4f4f-bc95-c7d28b694837] Running
	I0109 00:30:17.064872  457766 system_pods.go:61] "kube-controller-manager-newest-cni-745275" [06afc51d-5d8b-4a1a-a0fa-9711e5f28dd8] Running
	I0109 00:30:17.064878  457766 system_pods.go:61] "kube-proxy-9jhk9" [77ecfc22-1711-44b0-bebb-d1f3e16b6e66] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0109 00:30:17.064884  457766 system_pods.go:61] "kube-scheduler-newest-cni-745275" [81bbef2e-805b-4294-87db-40ad7c81770e] Running
	I0109 00:30:17.064897  457766 system_pods.go:61] "storage-provisioner" [567a5b1f-9770-4c49-a1d6-7c0dd1154094] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0109 00:30:17.064909  457766 system_pods.go:74] duration metric: took 29.122013ms to wait for pod list to return data ...
	I0109 00:30:17.064921  457766 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:30:17.067657  457766 default_sa.go:45] found service account: "default"
	I0109 00:30:17.067684  457766 default_sa.go:55] duration metric: took 2.757273ms for default service account to be created ...
	I0109 00:30:17.067694  457766 kubeadm.go:581] duration metric: took 515.659792ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0109 00:30:17.067710  457766 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:30:17.071221  457766 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0109 00:30:17.071248  457766 node_conditions.go:123] node cpu capacity is 2
	I0109 00:30:17.071260  457766 node_conditions.go:105] duration metric: took 3.546363ms to run NodePressure ...
	I0109 00:30:17.071271  457766 start.go:228] waiting for startup goroutines ...
	I0109 00:30:17.071277  457766 start.go:233] waiting for cluster config update ...
	I0109 00:30:17.071287  457766 start.go:242] writing updated cluster config ...
	I0109 00:30:17.071543  457766 ssh_runner.go:195] Run: rm -f paused
	I0109 00:30:17.135568  457766 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0109 00:30:17.137782  457766 out.go:177] * Done! kubectl is now configured to use "newest-cni-745275" cluster and "default" namespace by default
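At this point kubectl is pointed at the "newest-cni-745275" context. As a usage sketch only (not part of the test output), a small client-go program can load that same kubeconfig and list the kube-system pods the apiserver wait just confirmed:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig and pin the context minikube just wrote.
    	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		&clientcmd.ClientConfigLoadingRules{ExplicitPath: clientcmd.RecommendedHomeFile},
    		&clientcmd.ConfigOverrides{CurrentContext: "newest-cni-745275"},
    	).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }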
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:00 UTC, ends at Tue 2024-01-09 00:30:18 UTC. --
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.852779788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7221be53-c0e0-4e64-8adb-5d74059c8ded name=/runtime.v1.RuntimeService/Version
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.854797872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d6b0c9bc-3138-49be-94d1-66b63f6ef2e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.855403861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760218855370682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d6b0c9bc-3138-49be-94d1-66b63f6ef2e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.856373118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c81d18e1-c796-4b39-8306-517b8441889d name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.856420143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c81d18e1-c796-4b39-8306-517b8441889d name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.859150656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c81d18e1-c796-4b39-8306-517b8441889d name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.875280294Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=82338026-a5ba-4255-a05b-f4c85eb1f7ac name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.876600309Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:19e4933d-98fd-4607-bc51-e8e2ff8b65bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759270815389133,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-09T00:14:30.479501444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5693e75258bd874d7482c615b9b29d42494b186f5bfcbc57943866d9086d2d3,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-zg66s,Uid:0052e55b-f5ad-4aea-9568-9a5f99033dc3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759270650773458,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-zg66s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0052e55b-f5ad-4aea-9568-9a5f99033dc
3,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:14:30.312062363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-j5mzp,Uid:79554198-e2ef-48e1-b6e3-fc3ea068778e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759268953234303,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:14:27.717116402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&PodSandboxMetadata{Name:kube-proxy-nxtn2,Uid:4bb69868-6675-4dc0-80c1-b3
b2dc0ba6df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759267951270462,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-09T00:14:27.603493506Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-845373,Uid:3f45ba2df6fdcefd9dfd934ee81f179e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759246679672771,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,tier: control-plane,},
Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.132:2379,kubernetes.io/config.hash: 3f45ba2df6fdcefd9dfd934ee81f179e,kubernetes.io/config.seen: 2024-01-09T00:14:06.094196688Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-845373,Uid:b642b3202f9439b200815317f33d9f62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759246664111349,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b200815317f33d9f62,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b642b3202f9439b200815317f33d9f62,kubernetes.io/config.seen: 2024-01-09T00:14:06.094203390Z,kubernetes.io/config.source: file,},Ru
ntimeHandler:,},&PodSandbox{Id:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-845373,Uid:c38e127fd5bc00b6942c03508d096ef2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759246644090533,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c38e127fd5bc00b6942c03508d096ef2,kubernetes.io/config.seen: 2024-01-09T00:14:06.094205121Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-845373,Uid:ed7bcc1cc36817fc87196f2cfc0eae17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704759246601708338,Labels
:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.132:8443,kubernetes.io/config.hash: ed7bcc1cc36817fc87196f2cfc0eae17,kubernetes.io/config.seen: 2024-01-09T00:14:06.094201188Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=82338026-a5ba-4255-a05b-f4c85eb1f7ac name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.879234116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e73c12a-8820-47ac-a476-a1713bd603f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.879337017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e73c12a-8820-47ac-a476-a1713bd603f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.879522162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e73c12a-8820-47ac-a476-a1713bd603f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.918187071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=88b8cc86-c242-44f1-8d96-7b36e65ba281 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.918265108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=88b8cc86-c242-44f1-8d96-7b36e65ba281 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.920611024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aa239c96-74f7-419c-8ce7-5294680d6e3c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.921423726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760218921399357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aa239c96-74f7-419c-8ce7-5294680d6e3c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.922510249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58687fd6-acf7-41e8-95c7-ed914ee6c852 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.922607226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=58687fd6-acf7-41e8-95c7-ed914ee6c852 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.923038706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=58687fd6-acf7-41e8-95c7-ed914ee6c852 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.970436659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=733d1390-2916-416a-96e2-5b33220dfa3f name=/runtime.v1.RuntimeService/Version
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.970543959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=733d1390-2916-416a-96e2-5b33220dfa3f name=/runtime.v1.RuntimeService/Version
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.972032757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7d055efb-793e-47e5-a747-0a7e209d3dae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.972700000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760218972635830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7d055efb-793e-47e5-a747-0a7e209d3dae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.973350226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3dbec32e-e9c5-47b5-b46c-291acd90ed70 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.973411843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3dbec32e-e9c5-47b5-b46c-291acd90ed70 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:30:18 embed-certs-845373 crio[735]: time="2024-01-09 00:30:18.973652305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c,PodSandboxId:ef8d1e250718b819ba98d58f5499e508aba1e2a1d9742942aa803835174caf11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704759271730742517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e4933d-98fd-4607-bc51-e8e2ff8b65bb,},Annotations:map[string]string{io.kubernetes.container.hash: dc9d0fba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757,PodSandboxId:d23535b03541785fb201855b4db544273dab976e9a74f664b5f71481f2fc395f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704759270851021090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5mzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79554198-e2ef-48e1-b6e3-fc3ea068778e,},Annotations:map[string]string{io.kubernetes.container.hash: e879578,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247,PodSandboxId:c4ba02b25054ab96485d85465654a29954ca9966443858ac52fff162fae94279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704759269029469107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxtn2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 4bb69868-6675-4dc0-80c1-b3b2dc0ba6df,},Annotations:map[string]string{io.kubernetes.container.hash: 9407db37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773,PodSandboxId:1f55e027c66550941a730fec2778177226681cfefd7900aea8ff33bb64eaf10f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704759248147728511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f45ba2df6fdcefd9dfd934ee81f179e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7674a831,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb,PodSandboxId:8c80a5a849a8c1d0399864c2c1a0ac328084c5b80cf8029d07f738f1632537e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704759247660970740,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38e127fd5bc00b6942c03508d096ef2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2,PodSandboxId:75714a74543d64a2fd7f4070bf43be5edf08daf9922f8e3de684b7f50f81829c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704759247485027560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b642b3202f9439b2008153
17f33d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9,PodSandboxId:ba0eedb3e2b2e58ad6a3713d7611fab09e2f5b1b4304233b91f6b41bf9ef790f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704759247160362892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845373,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed7bcc1cc36817fc87196f2cfc0eae17
,},Annotations:map[string]string{io.kubernetes.container.hash: c31878f6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3dbec32e-e9c5-47b5-b46c-291acd90ed70 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc47842bcf90f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ef8d1e250718b       storage-provisioner
	deabd24b79316       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   d23535b035417       coredns-5dd5756b68-j5mzp
	6004d919ad63c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   c4ba02b25054a       kube-proxy-nxtn2
	004d97d95671f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   1f55e027c6655       etcd-embed-certs-845373
	e1948c9408655       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   8c80a5a849a8c       kube-scheduler-embed-certs-845373
	3e878d8b2a29f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   75714a74543d6       kube-controller-manager-embed-certs-845373
	a465e638ed034       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   ba0eedb3e2b2e       kube-apiserver-embed-certs-845373
	
	
	==> coredns [deabd24b793166ecd0e7ad21d4971522f6b43f9be22df5835c0a946724128757] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-845373
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-845373
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=embed-certs-845373
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_14_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-845373
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:29:53 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:29:53 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:29:53 +0000   Tue, 09 Jan 2024 00:14:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:29:53 +0000   Tue, 09 Jan 2024 00:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.132
	  Hostname:    embed-certs-845373
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e89c5eae0b8446369000f9c55a1cbbc6
	  System UUID:                e89c5eae-0b84-4636-9000-f9c55a1cbbc6
	  Boot ID:                    f4410abe-81e8-47b6-8742-776c205ebec1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j5mzp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-845373                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-845373             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-845373    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-nxtn2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-845373             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-zg66s               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-845373 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-845373 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-845373 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m   kubelet          Node embed-certs-845373 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node embed-certs-845373 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-845373 event: Registered Node embed-certs-845373 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066835] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.369305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.477955] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139146] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jan 9 00:09] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.866413] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.112627] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.152775] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.114459] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[  +0.229501] systemd-fstab-generator[719]: Ignoring "noauto" for root device
	[ +17.510670] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[ +20.055243] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 9 00:14] systemd-fstab-generator[3463]: Ignoring "noauto" for root device
	[  +9.300950] systemd-fstab-generator[3791]: Ignoring "noauto" for root device
	[ +14.155259] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [004d97d95671fa408ee57a54617d178fefcd9105f773ab5ca697ba18e6686773] <==
	{"level":"info","ts":"2024-01-09T00:14:10.165167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:14:10.165184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 received MsgPreVoteResp from 570016793c978bd8 at term 1"}
	{"level":"info","ts":"2024-01-09T00:14:10.165196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 received MsgVoteResp from 570016793c978bd8 at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"570016793c978bd8 became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.165216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 570016793c978bd8 elected leader 570016793c978bd8 at term 2"}
	{"level":"info","ts":"2024-01-09T00:14:10.167079Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.168424Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"570016793c978bd8","local-member-attributes":"{Name:embed-certs-845373 ClientURLs:[https://192.168.50.132:2379]}","request-path":"/0/members/570016793c978bd8/attributes","cluster-id":"b8c14781592b9d32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:14:10.16912Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b8c14781592b9d32","local-member-id":"570016793c978bd8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.169241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.16929Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:14:10.169319Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:14:10.171101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.132:2379"}
	{"level":"info","ts":"2024-01-09T00:14:10.171176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:14:10.172288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:14:10.186687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:14:10.186759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:24:10.22271Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2024-01-09T00:24:10.226438Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.783081ms","hash":737016404}
	{"level":"info","ts":"2024-01-09T00:24:10.22654Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":737016404,"revision":678,"compact-revision":-1}
	{"level":"info","ts":"2024-01-09T00:29:10.235178Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2024-01-09T00:29:10.2382Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":921,"took":"2.313623ms","hash":3821455183}
	{"level":"info","ts":"2024-01-09T00:29:10.238312Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3821455183,"revision":921,"compact-revision":678}
	{"level":"warn","ts":"2024-01-09T00:29:49.93867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.501745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-09T00:29:49.939012Z","caller":"traceutil/trace.go:171","msg":"trace[29733233] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1197; }","duration":"123.853154ms","start":"2024-01-09T00:29:49.815039Z","end":"2024-01-09T00:29:49.938892Z","steps":["trace[29733233] 'range keys from in-memory index tree'  (duration: 123.372255ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:30:19 up 21 min,  0 users,  load average: 0.27, 0.34, 0.24
	Linux embed-certs-845373 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a465e638ed034b1f4fa8da4b6d1816ada6d1051cffb39582953bfe5177f8c8f9] <==
	I0109 00:27:12.858335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:27:12.860589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:27:12.860661       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:27:12.860695       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:28:11.719732       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0109 00:29:11.719748       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:29:11.860983       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:29:11.861093       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:29:11.861721       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:29:12.862073       1 handler_proxy.go:93] no RequestInfo found in the context
	W0109 00:29:12.862094       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:29:12.862233       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:29:12.862259       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0109 00:29:12.862180       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:29:12.863626       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0109 00:30:11.720177       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0109 00:30:12.863242       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:30:12.863306       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:30:12.863319       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:30:12.864496       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:30:12.864568       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:30:12.864583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3e878d8b2a29f302d66629fa0bfa0e39779f8ae7897166b12a8817ed6c6a5ae2] <==
	I0109 00:24:27.460539       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:56.977638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:57.470303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:25:26.405274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="257.35µs"
	E0109 00:25:26.984198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:27.481046       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:25:37.407430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="187.048µs"
	E0109 00:25:56.991511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:57.491705       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:26.998719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:27.501239       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:57.005233       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:57.510819       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:27.011447       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:27.520059       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:57.018029       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:57.529620       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:27.024786       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:27.540065       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:57.031092       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:57.549370       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:27.038025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:27.563324       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:57.045246       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:57.572816       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6004d919ad63c11b7f87317619598781b037ba44d085c853ca885da0ac802247] <==
	I0109 00:14:30.706197       1 server_others.go:69] "Using iptables proxy"
	I0109 00:14:30.734014       1 node.go:141] Successfully retrieved node IP: 192.168.50.132
	I0109 00:14:31.356439       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0109 00:14:31.356546       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:14:31.392038       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:14:31.393727       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:14:31.394214       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:14:31.394254       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:14:31.397626       1 config.go:188] "Starting service config controller"
	I0109 00:14:31.398308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:14:31.398368       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:14:31.398387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:14:31.403512       1 config.go:315] "Starting node config controller"
	I0109 00:14:31.403556       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:14:31.498756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:14:31.509378       1 shared_informer.go:318] Caches are synced for node config
	I0109 00:14:31.509620       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e1948c9408655a94a4a4febcb9e1e73d6bc3e206c16c5c0b4671a196399f80fb] <==
	W0109 00:14:11.862144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:11.862623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:12.813477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:14:12.813525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0109 00:14:12.838106       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:14:12.838170       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:14:12.902299       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:14:12.902354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:14:12.911532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:12.911602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:12.922147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:14:12.922203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:14:12.968179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:12.968232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:14:13.004772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:14:13.004983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:14:13.036405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:14:13.036460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:14:13.063161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:14:13.063186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:14:13.081880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:14:13.081989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:14:13.088989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:14:13.089015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0109 00:14:16.042559       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:00 UTC, ends at Tue 2024-01-09 00:30:19 UTC. --
	Jan 09 00:27:53 embed-certs-845373 kubelet[3798]: E0109 00:27:53.387008    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:05 embed-certs-845373 kubelet[3798]: E0109 00:28:05.386059    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:15 embed-certs-845373 kubelet[3798]: E0109 00:28:15.481319    3798 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:28:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:28:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:28:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:28:19 embed-certs-845373 kubelet[3798]: E0109 00:28:19.386233    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:30 embed-certs-845373 kubelet[3798]: E0109 00:28:30.385560    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:43 embed-certs-845373 kubelet[3798]: E0109 00:28:43.385177    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:28:57 embed-certs-845373 kubelet[3798]: E0109 00:28:57.384983    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:29:10 embed-certs-845373 kubelet[3798]: E0109 00:29:10.385198    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:29:15 embed-certs-845373 kubelet[3798]: E0109 00:29:15.479671    3798 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:29:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:29:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:29:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:29:15 embed-certs-845373 kubelet[3798]: E0109 00:29:15.528741    3798 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 09 00:29:21 embed-certs-845373 kubelet[3798]: E0109 00:29:21.386381    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:29:33 embed-certs-845373 kubelet[3798]: E0109 00:29:33.385058    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:29:46 embed-certs-845373 kubelet[3798]: E0109 00:29:46.384650    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:30:00 embed-certs-845373 kubelet[3798]: E0109 00:30:00.385229    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:30:12 embed-certs-845373 kubelet[3798]: E0109 00:30:12.385096    3798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zg66s" podUID="0052e55b-f5ad-4aea-9568-9a5f99033dc3"
	Jan 09 00:30:15 embed-certs-845373 kubelet[3798]: E0109 00:30:15.483405    3798 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:30:15 embed-certs-845373 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:30:15 embed-certs-845373 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:30:15 embed-certs-845373 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [cc47842bcf90f580ca45f68f0eddf0f96bff92fe97f5729aa9d6caf655439a7c] <==
	I0109 00:14:31.860803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:14:31.884016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:14:31.884384       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:14:31.897423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:14:31.898536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c!
	I0109 00:14:31.897868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73937ad1-88d2-476e-aac5-99db1703d35c", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c became leader
	I0109 00:14:32.000199       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-845373_0b5f8b1d-4c1e-4143-94eb-a87e1023c69c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845373 -n embed-certs-845373
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-845373 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zg66s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s: exit status 1 (63.752692ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zg66s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-845373 describe pod metrics-server-57f55c9bc5-zg66s: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (129.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (42.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0109 00:29:19.627466  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:29:22.327605  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:29:40.116732  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-378213 -n no-preload-378213
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-09 00:29:54.283167552 +0000 UTC m=+5893.160117760
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-378213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-378213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.85µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-378213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-378213 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-378213 logs -n 25: (1.357069651s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo                                  | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo find                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-976891 sudo crio                             | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:00 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-976891                                       | bridge-976891                | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-566492 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | disable-driver-mounts-566492                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:02 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-003293        | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-845373            | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-378213             | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-834116  | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC | 09 Jan 24 00:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:02 UTC |                     |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-003293             | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845373                 | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845373                                  | embed-certs-845373           | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-378213                  | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-378213                                   | no-preload-378213            | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-834116       | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-834116 | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:14 UTC |
	|         | default-k8s-diff-port-834116                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-003293                              | old-k8s-version-003293       | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	| start   | -p newest-cni-745275 --memory=2200 --alsologtostderr   | newest-cni-745275            | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:29:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:29:11.688732  457766 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:29:11.688871  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.688878  457766 out.go:309] Setting ErrFile to fd 2...
	I0109 00:29:11.688885  457766 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:29:11.689175  457766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0109 00:29:11.689819  457766 out.go:303] Setting JSON to false
	I0109 00:29:11.690900  457766 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18678,"bootTime":1704741474,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0109 00:29:11.690970  457766 start.go:138] virtualization: kvm guest
	I0109 00:29:11.693663  457766 out.go:177] * [newest-cni-745275] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0109 00:29:11.695220  457766 notify.go:220] Checking for updates...
	I0109 00:29:11.696511  457766 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:29:11.697854  457766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:29:11.699113  457766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0109 00:29:11.700713  457766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:11.702142  457766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0109 00:29:11.703441  457766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:29:11.705450  457766 config.go:182] Loaded profile config "default-k8s-diff-port-834116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705599  457766 config.go:182] Loaded profile config "embed-certs-845373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:29:11.705733  457766 config.go:182] Loaded profile config "no-preload-378213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:29:11.705960  457766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:29:11.753468  457766 out.go:177] * Using the kvm2 driver based on user configuration
	I0109 00:29:11.755037  457766 start.go:298] selected driver: kvm2
	I0109 00:29:11.755057  457766 start.go:902] validating driver "kvm2" against <nil>
	I0109 00:29:11.755077  457766 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:29:11.755971  457766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.756044  457766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0109 00:29:11.773794  457766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0109 00:29:11.773854  457766 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0109 00:29:11.773927  457766 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0109 00:29:11.776728  457766 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0109 00:29:11.776819  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:29:11.776837  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:29:11.776868  457766 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0109 00:29:11.776889  457766 start_flags.go:323] config:
	{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:29:11.777170  457766 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:29:11.779020  457766 out.go:177] * Starting control plane node newest-cni-745275 in cluster newest-cni-745275
	I0109 00:29:11.780422  457766 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:29:11.780468  457766 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0109 00:29:11.780477  457766 cache.go:56] Caching tarball of preloaded images
	I0109 00:29:11.780546  457766 preload.go:174] Found /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0109 00:29:11.780557  457766 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0109 00:29:11.780646  457766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:29:11.780691  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json: {Name:mk4d641c387ca3ed27cddd141100c40e37d72082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:11.780835  457766 start.go:365] acquiring machines lock for newest-cni-745275: {Name:mk35c7e61c7424729701ed925d6243da31c48484 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0109 00:29:11.780874  457766 start.go:369] acquired machines lock for "newest-cni-745275" in 24.81µs
	I0109 00:29:11.780899  457766 start.go:93] Provisioning new machine with config: &{Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:29:11.780969  457766 start.go:125] createHost starting for "" (driver="kvm2")
	I0109 00:29:11.782998  457766 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0109 00:29:11.783142  457766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0109 00:29:11.783177  457766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0109 00:29:11.801506  457766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0109 00:29:11.802033  457766 main.go:141] libmachine: () Calling .GetVersion
	I0109 00:29:11.802719  457766 main.go:141] libmachine: Using API Version  1
	I0109 00:29:11.802750  457766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0109 00:29:11.803299  457766 main.go:141] libmachine: () Calling .GetMachineName
	I0109 00:29:11.803551  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:11.803725  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:11.803909  457766 start.go:159] libmachine.API.Create for "newest-cni-745275" (driver="kvm2")
	I0109 00:29:11.803941  457766 client.go:168] LocalClient.Create starting
	I0109 00:29:11.804008  457766 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem
	I0109 00:29:11.804041  457766 main.go:141] libmachine: Decoding PEM data...
	I0109 00:29:11.804055  457766 main.go:141] libmachine: Parsing certificate...
	I0109 00:29:11.804123  457766 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem
	I0109 00:29:11.804144  457766 main.go:141] libmachine: Decoding PEM data...
	I0109 00:29:11.804153  457766 main.go:141] libmachine: Parsing certificate...
	I0109 00:29:11.804168  457766 main.go:141] libmachine: Running pre-create checks...
	I0109 00:29:11.804179  457766 main.go:141] libmachine: (newest-cni-745275) Calling .PreCreateCheck
	I0109 00:29:11.804568  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:11.805090  457766 main.go:141] libmachine: Creating machine...
	I0109 00:29:11.805105  457766 main.go:141] libmachine: (newest-cni-745275) Calling .Create
	I0109 00:29:11.805267  457766 main.go:141] libmachine: (newest-cni-745275) Creating KVM machine...
	I0109 00:29:11.806298  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found existing default KVM network
	I0109 00:29:11.807865  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.807663  457807 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0b:0a:00} reservation:<nil>}
	I0109 00:29:11.808753  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.808667  457807 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ca:6a:ce} reservation:<nil>}
	I0109 00:29:11.809620  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.809526  457807 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:02:68} reservation:<nil>}
	I0109 00:29:11.810855  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.810788  457807 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000304eb0}
	I0109 00:29:11.816157  457766 main.go:141] libmachine: (newest-cni-745275) DBG | trying to create private KVM network mk-newest-cni-745275 192.168.72.0/24...
	I0109 00:29:11.905107  457766 main.go:141] libmachine: (newest-cni-745275) DBG | private KVM network mk-newest-cni-745275 192.168.72.0/24 created
	I0109 00:29:11.905148  457766 main.go:141] libmachine: (newest-cni-745275) Setting up store path in /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 ...
	I0109 00:29:11.905161  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:11.905052  457807 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:11.905175  457766 main.go:141] libmachine: (newest-cni-745275) Building disk image from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0109 00:29:11.905263  457766 main.go:141] libmachine: (newest-cni-745275) Downloading /home/jenkins/minikube-integration/17830-399915/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0109 00:29:12.174015  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.173876  457807 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa...
	I0109 00:29:12.447386  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.447209  457807 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/newest-cni-745275.rawdisk...
	I0109 00:29:12.447429  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Writing magic tar header
	I0109 00:29:12.447522  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Writing SSH key tar header
	I0109 00:29:12.447655  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:12.447569  457807 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 ...
	I0109 00:29:12.447748  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275
	I0109 00:29:12.448081  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275 (perms=drwx------)
	I0109 00:29:12.448115  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube/machines
	I0109 00:29:12.448130  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube/machines (perms=drwxr-xr-x)
	I0109 00:29:12.448150  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915/.minikube (perms=drwxr-xr-x)
	I0109 00:29:12.448166  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration/17830-399915 (perms=drwxrwxr-x)
	I0109 00:29:12.448178  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915/.minikube
	I0109 00:29:12.448197  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0109 00:29:12.448213  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-399915
	I0109 00:29:12.448227  457766 main.go:141] libmachine: (newest-cni-745275) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0109 00:29:12.448242  457766 main.go:141] libmachine: (newest-cni-745275) Creating domain...
	I0109 00:29:12.448254  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0109 00:29:12.448272  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home/jenkins
	I0109 00:29:12.448284  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Checking permissions on dir: /home
	I0109 00:29:12.448300  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Skipping /home - not owner
	I0109 00:29:12.449799  457766 main.go:141] libmachine: (newest-cni-745275) define libvirt domain using xml: 
	I0109 00:29:12.449822  457766 main.go:141] libmachine: (newest-cni-745275) <domain type='kvm'>
	I0109 00:29:12.449859  457766 main.go:141] libmachine: (newest-cni-745275)   <name>newest-cni-745275</name>
	I0109 00:29:12.449886  457766 main.go:141] libmachine: (newest-cni-745275)   <memory unit='MiB'>2200</memory>
	I0109 00:29:12.449895  457766 main.go:141] libmachine: (newest-cni-745275)   <vcpu>2</vcpu>
	I0109 00:29:12.449900  457766 main.go:141] libmachine: (newest-cni-745275)   <features>
	I0109 00:29:12.449907  457766 main.go:141] libmachine: (newest-cni-745275)     <acpi/>
	I0109 00:29:12.449914  457766 main.go:141] libmachine: (newest-cni-745275)     <apic/>
	I0109 00:29:12.449920  457766 main.go:141] libmachine: (newest-cni-745275)     <pae/>
	I0109 00:29:12.449928  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.449934  457766 main.go:141] libmachine: (newest-cni-745275)   </features>
	I0109 00:29:12.449942  457766 main.go:141] libmachine: (newest-cni-745275)   <cpu mode='host-passthrough'>
	I0109 00:29:12.449954  457766 main.go:141] libmachine: (newest-cni-745275)   
	I0109 00:29:12.449970  457766 main.go:141] libmachine: (newest-cni-745275)   </cpu>
	I0109 00:29:12.449983  457766 main.go:141] libmachine: (newest-cni-745275)   <os>
	I0109 00:29:12.449994  457766 main.go:141] libmachine: (newest-cni-745275)     <type>hvm</type>
	I0109 00:29:12.450004  457766 main.go:141] libmachine: (newest-cni-745275)     <boot dev='cdrom'/>
	I0109 00:29:12.450009  457766 main.go:141] libmachine: (newest-cni-745275)     <boot dev='hd'/>
	I0109 00:29:12.450018  457766 main.go:141] libmachine: (newest-cni-745275)     <bootmenu enable='no'/>
	I0109 00:29:12.450023  457766 main.go:141] libmachine: (newest-cni-745275)   </os>
	I0109 00:29:12.450035  457766 main.go:141] libmachine: (newest-cni-745275)   <devices>
	I0109 00:29:12.450050  457766 main.go:141] libmachine: (newest-cni-745275)     <disk type='file' device='cdrom'>
	I0109 00:29:12.450070  457766 main.go:141] libmachine: (newest-cni-745275)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/boot2docker.iso'/>
	I0109 00:29:12.450083  457766 main.go:141] libmachine: (newest-cni-745275)       <target dev='hdc' bus='scsi'/>
	I0109 00:29:12.450106  457766 main.go:141] libmachine: (newest-cni-745275)       <readonly/>
	I0109 00:29:12.450118  457766 main.go:141] libmachine: (newest-cni-745275)     </disk>
	I0109 00:29:12.450155  457766 main.go:141] libmachine: (newest-cni-745275)     <disk type='file' device='disk'>
	I0109 00:29:12.450182  457766 main.go:141] libmachine: (newest-cni-745275)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0109 00:29:12.450213  457766 main.go:141] libmachine: (newest-cni-745275)       <source file='/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/newest-cni-745275.rawdisk'/>
	I0109 00:29:12.450231  457766 main.go:141] libmachine: (newest-cni-745275)       <target dev='hda' bus='virtio'/>
	I0109 00:29:12.450242  457766 main.go:141] libmachine: (newest-cni-745275)     </disk>
	I0109 00:29:12.450252  457766 main.go:141] libmachine: (newest-cni-745275)     <interface type='network'>
	I0109 00:29:12.450264  457766 main.go:141] libmachine: (newest-cni-745275)       <source network='mk-newest-cni-745275'/>
	I0109 00:29:12.450273  457766 main.go:141] libmachine: (newest-cni-745275)       <model type='virtio'/>
	I0109 00:29:12.450290  457766 main.go:141] libmachine: (newest-cni-745275)     </interface>
	I0109 00:29:12.450299  457766 main.go:141] libmachine: (newest-cni-745275)     <interface type='network'>
	I0109 00:29:12.450309  457766 main.go:141] libmachine: (newest-cni-745275)       <source network='default'/>
	I0109 00:29:12.450319  457766 main.go:141] libmachine: (newest-cni-745275)       <model type='virtio'/>
	I0109 00:29:12.450335  457766 main.go:141] libmachine: (newest-cni-745275)     </interface>
	I0109 00:29:12.450346  457766 main.go:141] libmachine: (newest-cni-745275)     <serial type='pty'>
	I0109 00:29:12.450359  457766 main.go:141] libmachine: (newest-cni-745275)       <target port='0'/>
	I0109 00:29:12.450370  457766 main.go:141] libmachine: (newest-cni-745275)     </serial>
	I0109 00:29:12.450383  457766 main.go:141] libmachine: (newest-cni-745275)     <console type='pty'>
	I0109 00:29:12.450393  457766 main.go:141] libmachine: (newest-cni-745275)       <target type='serial' port='0'/>
	I0109 00:29:12.450411  457766 main.go:141] libmachine: (newest-cni-745275)     </console>
	I0109 00:29:12.450420  457766 main.go:141] libmachine: (newest-cni-745275)     <rng model='virtio'>
	I0109 00:29:12.450435  457766 main.go:141] libmachine: (newest-cni-745275)       <backend model='random'>/dev/random</backend>
	I0109 00:29:12.450446  457766 main.go:141] libmachine: (newest-cni-745275)     </rng>
	I0109 00:29:12.450456  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.450465  457766 main.go:141] libmachine: (newest-cni-745275)     
	I0109 00:29:12.450475  457766 main.go:141] libmachine: (newest-cni-745275)   </devices>
	I0109 00:29:12.450487  457766 main.go:141] libmachine: (newest-cni-745275) </domain>
	I0109 00:29:12.450499  457766 main.go:141] libmachine: (newest-cni-745275) 
	I0109 00:29:12.455338  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:55:63:71 in network default
	I0109 00:29:12.456135  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring networks are active...
	I0109 00:29:12.456162  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:12.456921  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring network default is active
	I0109 00:29:12.457333  457766 main.go:141] libmachine: (newest-cni-745275) Ensuring network mk-newest-cni-745275 is active
	I0109 00:29:12.458065  457766 main.go:141] libmachine: (newest-cni-745275) Getting domain xml...
	I0109 00:29:12.459025  457766 main.go:141] libmachine: (newest-cni-745275) Creating domain...
	I0109 00:29:13.885256  457766 main.go:141] libmachine: (newest-cni-745275) Waiting to get IP...
	I0109 00:29:13.886297  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:13.886750  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:13.886893  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:13.886752  457807 retry.go:31] will retry after 257.298601ms: waiting for machine to come up
	I0109 00:29:14.145529  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.146148  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.146205  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.146086  457807 retry.go:31] will retry after 364.099957ms: waiting for machine to come up
	I0109 00:29:14.511860  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.512383  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.512415  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.512329  457807 retry.go:31] will retry after 457.359198ms: waiting for machine to come up
	I0109 00:29:14.970920  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:14.971439  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:14.971527  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:14.971440  457807 retry.go:31] will retry after 515.451223ms: waiting for machine to come up
	I0109 00:29:15.488173  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:15.488716  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:15.488747  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:15.488663  457807 retry.go:31] will retry after 493.074085ms: waiting for machine to come up
	I0109 00:29:15.983436  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:15.983927  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:15.983960  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:15.983857  457807 retry.go:31] will retry after 916.090818ms: waiting for machine to come up
	I0109 00:29:16.901416  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:16.901879  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:16.901907  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:16.901829  457807 retry.go:31] will retry after 1.157895775s: waiting for machine to come up
	I0109 00:29:18.061691  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:18.062252  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:18.062277  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:18.062198  457807 retry.go:31] will retry after 1.397423702s: waiting for machine to come up
	I0109 00:29:19.461173  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:19.461627  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:19.461651  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:19.461581  457807 retry.go:31] will retry after 1.332950781s: waiting for machine to come up
	I0109 00:29:20.796107  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:20.796540  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:20.796574  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:20.796482  457807 retry.go:31] will retry after 2.241146328s: waiting for machine to come up
	I0109 00:29:23.039833  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:23.040390  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:23.040424  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:23.040328  457807 retry.go:31] will retry after 2.022201691s: waiting for machine to come up
	I0109 00:29:25.064723  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:25.065170  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:25.065201  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:25.065127  457807 retry.go:31] will retry after 3.398624103s: waiting for machine to come up
	I0109 00:29:28.465932  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:28.466445  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:28.466474  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:28.466413  457807 retry.go:31] will retry after 3.878176349s: waiting for machine to come up
	I0109 00:29:32.346143  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:32.346822  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find current IP address of domain newest-cni-745275 in network mk-newest-cni-745275
	I0109 00:29:32.346850  457766 main.go:141] libmachine: (newest-cni-745275) DBG | I0109 00:29:32.346770  457807 retry.go:31] will retry after 5.266293301s: waiting for machine to come up
	I0109 00:29:37.614760  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.615259  457766 main.go:141] libmachine: (newest-cni-745275) Found IP for machine: 192.168.72.107
	I0109 00:29:37.615281  457766 main.go:141] libmachine: (newest-cni-745275) Reserving static IP address...
	I0109 00:29:37.615291  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has current primary IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.615715  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find host DHCP lease matching {name: "newest-cni-745275", mac: "52:54:00:41:55:15", ip: "192.168.72.107"} in network mk-newest-cni-745275
	I0109 00:29:37.697767  457766 main.go:141] libmachine: (newest-cni-745275) Reserved static IP address: 192.168.72.107
	I0109 00:29:37.697805  457766 main.go:141] libmachine: (newest-cni-745275) Waiting for SSH to be available...
	I0109 00:29:37.697822  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Getting to WaitForSSH function...
	I0109 00:29:37.700543  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:37.700933  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275
	I0109 00:29:37.700974  457766 main.go:141] libmachine: (newest-cni-745275) DBG | unable to find defined IP address of network mk-newest-cni-745275 interface with MAC address 52:54:00:41:55:15
	I0109 00:29:37.701130  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH client type: external
	I0109 00:29:37.701158  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa (-rw-------)
	I0109 00:29:37.701202  457766 main.go:141] libmachine: (newest-cni-745275) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:29:37.701235  457766 main.go:141] libmachine: (newest-cni-745275) DBG | About to run SSH command:
	I0109 00:29:37.701260  457766 main.go:141] libmachine: (newest-cni-745275) DBG | exit 0
	I0109 00:29:37.705117  457766 main.go:141] libmachine: (newest-cni-745275) DBG | SSH cmd err, output: exit status 255: 
	I0109 00:29:37.705145  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0109 00:29:37.705157  457766 main.go:141] libmachine: (newest-cni-745275) DBG | command : exit 0
	I0109 00:29:37.705175  457766 main.go:141] libmachine: (newest-cni-745275) DBG | err     : exit status 255
	I0109 00:29:37.705190  457766 main.go:141] libmachine: (newest-cni-745275) DBG | output  : 
	I0109 00:29:40.707273  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Getting to WaitForSSH function...
	I0109 00:29:40.709962  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.710410  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.710444  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.710611  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH client type: external
	I0109 00:29:40.710635  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa (-rw-------)
	I0109 00:29:40.710667  457766 main.go:141] libmachine: (newest-cni-745275) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0109 00:29:40.710682  457766 main.go:141] libmachine: (newest-cni-745275) DBG | About to run SSH command:
	I0109 00:29:40.710730  457766 main.go:141] libmachine: (newest-cni-745275) DBG | exit 0
	I0109 00:29:40.807441  457766 main.go:141] libmachine: (newest-cni-745275) DBG | SSH cmd err, output: <nil>: 
	I0109 00:29:40.807710  457766 main.go:141] libmachine: (newest-cni-745275) KVM machine creation complete!
	I0109 00:29:40.808079  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:40.808688  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:40.808920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:40.809099  457766 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0109 00:29:40.809117  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetState
	I0109 00:29:40.810518  457766 main.go:141] libmachine: Detecting operating system of created instance...
	I0109 00:29:40.810540  457766 main.go:141] libmachine: Waiting for SSH to be available...
	I0109 00:29:40.810550  457766 main.go:141] libmachine: Getting to WaitForSSH function...
	I0109 00:29:40.810560  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:40.812874  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.813307  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.813336  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.813505  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:40.813684  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.813871  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.814046  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:40.814231  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:40.814616  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:40.814636  457766 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0109 00:29:40.947086  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:29:40.947115  457766 main.go:141] libmachine: Detecting the provisioner...
	I0109 00:29:40.947128  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:40.950358  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.950703  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:40.950734  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:40.950920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:40.951166  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.951378  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:40.951574  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:40.951725  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:40.952096  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:40.952111  457766 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0109 00:29:41.084522  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0109 00:29:41.084650  457766 main.go:141] libmachine: found compatible host: buildroot
	I0109 00:29:41.084661  457766 main.go:141] libmachine: Provisioning with buildroot...
	I0109 00:29:41.084669  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.084970  457766 buildroot.go:166] provisioning hostname "newest-cni-745275"
	I0109 00:29:41.084999  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.085253  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.088254  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.088619  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.088655  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.088827  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.089025  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.089274  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.089398  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.089634  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.090013  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.090033  457766 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-745275 && echo "newest-cni-745275" | sudo tee /etc/hostname
	I0109 00:29:41.236695  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-745275
	
	I0109 00:29:41.236723  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.239668  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.240094  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.240125  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.240267  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.240502  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.240741  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.240920  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.241115  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.241494  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.241515  457766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-745275' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-745275/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-745275' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:29:41.380280  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:29:41.380320  457766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-399915/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-399915/.minikube}
	I0109 00:29:41.380351  457766 buildroot.go:174] setting up certificates
	I0109 00:29:41.380364  457766 provision.go:83] configureAuth start
	I0109 00:29:41.380383  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetMachineName
	I0109 00:29:41.380753  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:41.383713  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.384169  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.384199  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.384384  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.386919  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.387253  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.387288  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.387451  457766 provision.go:138] copyHostCerts
	I0109 00:29:41.387522  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem, removing ...
	I0109 00:29:41.387535  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem
	I0109 00:29:41.387616  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/ca.pem (1082 bytes)
	I0109 00:29:41.387729  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem, removing ...
	I0109 00:29:41.387741  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem
	I0109 00:29:41.387776  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/cert.pem (1123 bytes)
	I0109 00:29:41.387905  457766 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem, removing ...
	I0109 00:29:41.387919  457766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem
	I0109 00:29:41.387946  457766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-399915/.minikube/key.pem (1679 bytes)
	I0109 00:29:41.388025  457766 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem org=jenkins.newest-cni-745275 san=[192.168.72.107 192.168.72.107 localhost 127.0.0.1 minikube newest-cni-745275]
	I0109 00:29:41.559865  457766 provision.go:172] copyRemoteCerts
	I0109 00:29:41.559961  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:29:41.560000  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.563118  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.563527  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.563560  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.563751  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.563963  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.564157  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.564319  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:41.662599  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:29:41.687491  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0109 00:29:41.712388  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:29:41.735693  457766 provision.go:86] duration metric: configureAuth took 355.307403ms
	I0109 00:29:41.735746  457766 buildroot.go:189] setting minikube options for container-runtime
	I0109 00:29:41.735982  457766 config.go:182] Loaded profile config "newest-cni-745275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0109 00:29:41.736141  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:41.739339  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.739733  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:41.739782  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:41.739997  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:41.740220  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.740424  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:41.740616  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:41.740790  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:41.741147  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:41.741164  457766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:29:42.087109  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:29:42.087137  457766 main.go:141] libmachine: Checking connection to Docker...
	I0109 00:29:42.087146  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetURL
	I0109 00:29:42.088585  457766 main.go:141] libmachine: (newest-cni-745275) DBG | Using libvirt version 6000000
	I0109 00:29:42.091535  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.091932  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.092002  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.092306  457766 main.go:141] libmachine: Docker is up and running!
	I0109 00:29:42.092323  457766 main.go:141] libmachine: Reticulating splines...
	I0109 00:29:42.092330  457766 client.go:171] LocalClient.Create took 30.288379146s
	I0109 00:29:42.092353  457766 start.go:167] duration metric: libmachine.API.Create for "newest-cni-745275" took 30.288444437s
	I0109 00:29:42.092367  457766 start.go:300] post-start starting for "newest-cni-745275" (driver="kvm2")
	I0109 00:29:42.092385  457766 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:29:42.092422  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.092673  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:29:42.092703  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.095192  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.095710  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.095748  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.095999  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.096219  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.096385  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.096612  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.197392  457766 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:29:42.201898  457766 info.go:137] Remote host: Buildroot 2021.02.12
	I0109 00:29:42.201924  457766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/addons for local assets ...
	I0109 00:29:42.202008  457766 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-399915/.minikube/files for local assets ...
	I0109 00:29:42.202099  457766 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem -> 4070942.pem in /etc/ssl/certs
	I0109 00:29:42.202191  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:29:42.212292  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:29:42.235838  457766 start.go:303] post-start completed in 143.455436ms
	I0109 00:29:42.235889  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetConfigRaw
	I0109 00:29:42.236504  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:42.239467  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.239895  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.239929  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.240222  457766 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/config.json ...
	I0109 00:29:42.240442  457766 start.go:128] duration metric: createHost completed in 30.459457123s
	I0109 00:29:42.240510  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.243202  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.243645  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.243674  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.243768  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.243961  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.244120  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.244288  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.244453  457766 main.go:141] libmachine: Using SSH client type: native
	I0109 00:29:42.244790  457766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.107 22 <nil> <nil>}
	I0109 00:29:42.244803  457766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0109 00:29:42.380213  457766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704760182.368757092
	
	I0109 00:29:42.380244  457766 fix.go:206] guest clock: 1704760182.368757092
	I0109 00:29:42.380255  457766 fix.go:219] Guest: 2024-01-09 00:29:42.368757092 +0000 UTC Remote: 2024-01-09 00:29:42.240492728 +0000 UTC m=+30.609810626 (delta=128.264364ms)
	I0109 00:29:42.380303  457766 fix.go:190] guest clock delta is within tolerance: 128.264364ms
	I0109 00:29:42.380315  457766 start.go:83] releasing machines lock for "newest-cni-745275", held for 30.599428284s
	I0109 00:29:42.380348  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.380674  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:42.383692  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.384056  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.384083  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.384304  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.384839  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.385054  457766 main.go:141] libmachine: (newest-cni-745275) Calling .DriverName
	I0109 00:29:42.385152  457766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:29:42.385216  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.385292  457766 ssh_runner.go:195] Run: cat /version.json
	I0109 00:29:42.385322  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHHostname
	I0109 00:29:42.387742  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388077  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.388112  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388133  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388349  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.388531  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.388664  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:42.388675  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.388686  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:42.388795  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHPort
	I0109 00:29:42.388861  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.388964  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHKeyPath
	I0109 00:29:42.389119  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetSSHUsername
	I0109 00:29:42.389265  457766 sshutil.go:53] new ssh client: &{IP:192.168.72.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/newest-cni-745275/id_rsa Username:docker}
	I0109 00:29:42.516788  457766 ssh_runner.go:195] Run: systemctl --version
	I0109 00:29:42.522882  457766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:29:42.692632  457766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0109 00:29:42.699734  457766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0109 00:29:42.699838  457766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:29:42.716543  457766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:29:42.716573  457766 start.go:475] detecting cgroup driver to use...
	I0109 00:29:42.716655  457766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:29:42.730924  457766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:29:42.744175  457766 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:29:42.744247  457766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:29:42.762474  457766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:29:42.777122  457766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:29:42.883698  457766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:29:43.008307  457766 docker.go:219] disabling docker service ...
	I0109 00:29:43.008407  457766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:29:43.022895  457766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:29:43.037037  457766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:29:43.172277  457766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:29:43.296071  457766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:29:43.310145  457766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:29:43.328944  457766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:29:43.329010  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.339234  457766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:29:43.339319  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.349544  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:29:43.360020  457766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
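The sed edits above rewrite the CRI-O drop-in to set the pause image, switch the cgroup manager to cgroupfs, and pin conmon_cgroup to "pod". A rough, hypothetical Go equivalent operating on the same file might look like this; the config path is taken from the log, while the regexes are illustrative rather than minikube's own.

// Sketch: apply the same three 02-crio.conf edits with regexp instead of sed.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // assumed path, from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf := string(data)
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}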
	I0109 00:29:43.370015  457766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:29:43.381521  457766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:29:43.390544  457766 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0109 00:29:43.390612  457766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0109 00:29:43.402554  457766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
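The netfilter and forwarding checks above follow a simple pattern: if the bridge-nf-call-iptables sysctl is missing, load br_netfilter, then force IPv4 forwarding on. A minimal local sketch, assuming root access:

// Sketch of the br_netfilter fallback and ip_forward enablement logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeNF); err != nil {
		// The log shows the same fallback: modprobe br_netfilter when the sysctl is absent.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}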
	I0109 00:29:43.411937  457766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:29:43.512803  457766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:29:43.699559  457766 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:29:43.699691  457766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:29:43.705550  457766 start.go:543] Will wait 60s for crictl version
	I0109 00:29:43.705617  457766 ssh_runner.go:195] Run: which crictl
	I0109 00:29:43.709699  457766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:29:43.756776  457766 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0109 00:29:43.756890  457766 ssh_runner.go:195] Run: crio --version
	I0109 00:29:43.813309  457766 ssh_runner.go:195] Run: crio --version
	I0109 00:29:43.868764  457766 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0109 00:29:43.870210  457766 main.go:141] libmachine: (newest-cni-745275) Calling .GetIP
	I0109 00:29:43.873161  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:43.873586  457766 main.go:141] libmachine: (newest-cni-745275) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:55:15", ip: ""} in network mk-newest-cni-745275: {Iface:virbr2 ExpiryTime:2024-01-09 01:29:28 +0000 UTC Type:0 Mac:52:54:00:41:55:15 Iaid: IPaddr:192.168.72.107 Prefix:24 Hostname:newest-cni-745275 Clientid:01:52:54:00:41:55:15}
	I0109 00:29:43.873627  457766 main.go:141] libmachine: (newest-cni-745275) DBG | domain newest-cni-745275 has defined IP address 192.168.72.107 and MAC address 52:54:00:41:55:15 in network mk-newest-cni-745275
	I0109 00:29:43.873791  457766 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0109 00:29:43.878461  457766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
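The one-liner above makes the /etc/hosts update idempotent: it drops any stale host.minikube.internal line before appending the current mapping. A local Go sketch of the same logic follows; the ensureHostsEntry name is invented for illustration.

// Sketch: remove any existing mapping for the name, then append the fresh one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, mirroring the grep -v in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}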
	I0109 00:29:43.890679  457766 localpath.go:92] copying /home/jenkins/minikube-integration/17830-399915/.minikube/client.crt -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.crt
	I0109 00:29:43.890881  457766 localpath.go:117] copying /home/jenkins/minikube-integration/17830-399915/.minikube/client.key -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.key
	I0109 00:29:43.892918  457766 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0109 00:29:43.894316  457766 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:29:43.894390  457766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:29:43.930475  457766 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0109 00:29:43.930542  457766 ssh_runner.go:195] Run: which lz4
	I0109 00:29:43.935014  457766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0109 00:29:43.939648  457766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:29:43.939678  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401795125 bytes)
	I0109 00:29:45.521781  457766 crio.go:444] Took 1.586795 seconds to copy over tarball
	I0109 00:29:45.521895  457766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:29:48.387678  457766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.865742394s)
	I0109 00:29:48.387712  457766 crio.go:451] Took 2.865896 seconds to extract the tarball
	I0109 00:29:48.387725  457766 ssh_runner.go:146] rm: /preloaded.tar.lz4
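The preload path above boils down to: if /preloaded.tar.lz4 exists, unpack it into /var with lz4 decompression while preserving xattrs, then delete it. A sketch under the assumption that tar and lz4 are on PATH and the process runs as root:

// Sketch of the preload tarball extraction and cleanup logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "no preload tarball:", err)
		return
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract failed: %v: %s\n", err, out)
		return
	}
	_ = os.Remove(tarball)
}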
	I0109 00:29:48.427863  457766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:29:48.509622  457766 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:29:48.509655  457766 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:29:48.509806  457766 ssh_runner.go:195] Run: crio config
	I0109 00:29:48.569393  457766 cni.go:84] Creating CNI manager for ""
	I0109 00:29:48.569416  457766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0109 00:29:48.569444  457766 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0109 00:29:48.569468  457766 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.107 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-745275 NodeName:newest-cni-745275 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.72.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:29:48.569616  457766 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-745275"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
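The kubeadm config dumped above is generated from the cluster parameters listed earlier (advertise address, pod and service CIDRs, cluster name, and so on). The snippet below is not minikube's real template, just a small text/template sketch that renders the ClusterConfiguration fragment from a few of those parameters; the struct fields are invented for illustration.

// Sketch: render a ClusterConfiguration fragment from a handful of parameters.
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	params := struct {
		ControlPlaneEndpoint, KubernetesVersion, DNSDomain, PodSubnet, ServiceSubnet string
	}{
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		KubernetesVersion:    "v1.29.0-rc.2",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.42.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = tmpl.Execute(os.Stdout, params)
}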
	
	I0109 00:29:48.569722  457766 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-745275 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:29:48.569794  457766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0109 00:29:48.579381  457766 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:29:48.579468  457766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:29:48.588465  457766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0109 00:29:48.606489  457766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0109 00:29:48.624398  457766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
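The drop-in copied above overrides the kubelet ExecStart so it talks to CRI-O and uses the generated kubeconfig. A minimal local sketch of installing such a drop-in and reloading systemd follows; the ExecStart here is abbreviated from the full flag list logged earlier, and root access is assumed.

// Sketch: write the kubelet systemd drop-in and reload systemd.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.107
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "daemon-reload: %v: %s\n", err, out)
	}
}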
	I0109 00:29:48.642459  457766 ssh_runner.go:195] Run: grep 192.168.72.107	control-plane.minikube.internal$ /etc/hosts
	I0109 00:29:48.646734  457766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:29:48.659922  457766 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275 for IP: 192.168.72.107
	I0109 00:29:48.659967  457766 certs.go:190] acquiring lock for shared ca certs: {Name:mk5a1494d459422b3dc06160975d7eac43dfb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.660171  457766 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key
	I0109 00:29:48.660239  457766 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key
	I0109 00:29:48.660342  457766 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/client.key
	I0109 00:29:48.660365  457766 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713
	I0109 00:29:48.660381  457766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 with IP's: [192.168.72.107 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:29:48.784020  457766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 ...
	I0109 00:29:48.784056  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713: {Name:mk8e582bd51932418656f089c541a853f2436e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.784249  457766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713 ...
	I0109 00:29:48.784266  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713: {Name:mk8b17554733fece7685e52b093a0cf81bbabb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.784367  457766 certs.go:337] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt.52b42713 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt
	I0109 00:29:48.784452  457766 certs.go:341] copying /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key.52b42713 -> /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key
	I0109 00:29:48.784532  457766 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key
	I0109 00:29:48.784558  457766 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt with IP's: []
	I0109 00:29:48.925964  457766 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt ...
	I0109 00:29:48.925996  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt: {Name:mkf40339ad77247d160ed5370260f9070f03d05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:29:48.926200  457766 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key ...
	I0109 00:29:48.926224  457766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key: {Name:mk9d33377cda8fb82a9f36198a589923454968a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
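The certificate steps above generate apiserver and proxy-client key pairs with specific IP SANs. The sketch below is self-contained and only loosely analogous: it creates a throwaway RSA key and a self-signed certificate carrying the same IP SANs as the apiserver cert, whereas minikube signs its certs with the shared minikubeCA.

// Sketch: generate a key and a certificate with the IP SANs listed above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.72.107"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}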
	I0109 00:29:48.926441  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem (1338 bytes)
	W0109 00:29:48.926482  457766 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094_empty.pem, impossibly tiny 0 bytes
	I0109 00:29:48.926499  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca-key.pem (1675 bytes)
	I0109 00:29:48.926527  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:29:48.926550  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:29:48.926586  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/certs/home/jenkins/minikube-integration/17830-399915/.minikube/certs/key.pem (1679 bytes)
	I0109 00:29:48.926644  457766 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem (1708 bytes)
	I0109 00:29:48.927432  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:29:48.956801  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:29:48.981539  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:29:49.005384  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/newest-cni-745275/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:29:49.030087  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:29:49.054961  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0109 00:29:49.077837  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:29:49.102386  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:29:49.125332  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/certs/407094.pem --> /usr/share/ca-certificates/407094.pem (1338 bytes)
	I0109 00:29:49.149956  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/ssl/certs/4070942.pem --> /usr/share/ca-certificates/4070942.pem (1708 bytes)
	I0109 00:29:49.175824  457766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-399915/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:29:49.199449  457766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:29:49.216134  457766 ssh_runner.go:195] Run: openssl version
	I0109 00:29:49.222359  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/407094.pem && ln -fs /usr/share/ca-certificates/407094.pem /etc/ssl/certs/407094.pem"
	I0109 00:29:49.233099  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.237633  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 23:02 /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.237680  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/407094.pem
	I0109 00:29:49.243173  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/407094.pem /etc/ssl/certs/51391683.0"
	I0109 00:29:49.254739  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4070942.pem && ln -fs /usr/share/ca-certificates/4070942.pem /etc/ssl/certs/4070942.pem"
	I0109 00:29:49.267126  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.272393  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 23:02 /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.272457  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4070942.pem
	I0109 00:29:49.278065  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4070942.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:29:49.288339  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:29:49.300031  457766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.305040  457766 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.305119  457766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:29:49.310769  457766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
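The ln -fs commands above wire each CA certificate into the OpenSSL trust store by symlinking <subject-hash>.0 back to the PEM file. A sketch of the same wiring, assuming openssl on PATH and write access to /etc/ssl/certs (the cert path is copied from the log):

// Sketch: compute the subject hash of a PEM cert and create the <hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked", link, "->", certPath)
}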
	I0109 00:29:49.321658  457766 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:29:49.326066  457766 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:29:49.326169  457766 kubeadm.go:404] StartCluster: {Name:newest-cni-745275 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-745275 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:29:49.326254  457766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:29:49.326308  457766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:29:49.365875  457766 cri.go:89] found id: ""
	I0109 00:29:49.365987  457766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:29:49.376570  457766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:29:49.386955  457766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:29:49.397047  457766 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:29:49.397109  457766 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0109 00:29:49.502770  457766 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0109 00:29:49.502915  457766 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:29:49.780643  457766 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:29:49.780785  457766 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:29:49.780879  457766 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:29:50.021082  457766 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:29:50.242314  457766 out.go:204]   - Generating certificates and keys ...
	I0109 00:29:50.242428  457766 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:29:50.242543  457766 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:29:50.242633  457766 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:29:50.460477  457766 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:29:50.852988  457766 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:29:51.099764  457766 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:29:51.570378  457766 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:29:51.570641  457766 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-745275] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0109 00:29:51.640432  457766 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:29:51.640644  457766 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-745275] and IPs [192.168.72.107 127.0.0.1 ::1]
	I0109 00:29:51.742035  457766 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:29:52.316853  457766 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:29:52.641755  457766 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:29:52.642077  457766 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:29:52.694782  457766 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:29:53.064310  457766 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0109 00:29:53.256345  457766 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:29:53.572594  457766 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:29:54.164580  457766 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:29:54.165506  457766 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:29:54.170678  457766 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
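For reference, the kubeadm init invocation logged at the start of this block is a long command line assembled from the binary path, the rendered config, and a list of preflight errors to ignore. A sketch of assembling it in Go, with only a subset of the ignored checks and printing instead of executing:

// Sketch: build (but do not run) the kubeadm init command line shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","),
	)
	fmt.Println(strings.Join(cmd.Args, " "))
}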
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-09 00:09:20 UTC, ends at Tue 2024-01-09 00:29:55 UTC. --
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.063282166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760195063269362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=634c47d6-232f-4de9-9bcf-9c873eb84a7a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.064032992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3f1e48e4-812e-4817-b2c2-da6ebb7a5ac7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.064104346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3f1e48e4-812e-4817-b2c2-da6ebb7a5ac7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.064752223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3f1e48e4-812e-4817-b2c2-da6ebb7a5ac7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.108792703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0a5aa0b4-a8c4-4ca9-bd43-f0b8e093bb70 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.108882543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0a5aa0b4-a8c4-4ca9-bd43-f0b8e093bb70 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.110270096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ae389587-aa58-4719-b491-1d6c0611944c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.110565514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760195110554016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ae389587-aa58-4719-b491-1d6c0611944c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.111265753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eab26306-6a7d-496b-98c5-093e24927643 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.111311588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eab26306-6a7d-496b-98c5-093e24927643 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.111521741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eab26306-6a7d-496b-98c5-093e24927643 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.164257236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ea783ea8-83e5-4c3c-af7d-6d2b19a1268d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.164329447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ea783ea8-83e5-4c3c-af7d-6d2b19a1268d name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.166204462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=49cf77d9-c10e-4e7d-b1ab-beff3336e4ea name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.166639102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760195166620758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=49cf77d9-c10e-4e7d-b1ab-beff3336e4ea name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.167592692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb7caef1-58ec-4353-8d3d-4aaa6635f072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.167643274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb7caef1-58ec-4353-8d3d-4aaa6635f072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.167877712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb7caef1-58ec-4353-8d3d-4aaa6635f072 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.207700692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=28dea9fa-ef09-4ec8-8392-e6caf34acd15 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.207771324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=28dea9fa-ef09-4ec8-8392-e6caf34acd15 name=/runtime.v1.RuntimeService/Version
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.209055672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=310829ce-5e6e-40fc-a4f7-c1eb563082d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.209426858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704760195209411440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=310829ce-5e6e-40fc-a4f7-c1eb563082d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.210056244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e17795a3-3045-4b58-97c3-f2079648b439 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.210100660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e17795a3-3045-4b58-97c3-f2079648b439 name=/runtime.v1.RuntimeService/ListContainers
	Jan 09 00:29:55 no-preload-378213 crio[712]: time="2024-01-09 00:29:55.210256701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62,PodSandboxId:28e8d7228f95c6b41ea91e558ac817e234f40ce2785f5519eac3a5dff1e197fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704759334911088379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95fe5038-977e-430a-8bda-42557c536114,},Annotations:map[string]string{io.kubernetes.container.hash: ac879cd9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8,PodSandboxId:4021cd157f894dd04a11ec2cab1c51d6133811d60a1c3f4fe781c0aae33cad13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704759334579481564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ztvgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dca02e6-8b8c-491f-a689-fb9b51c5f88e,},Annotations:map[string]string{io.kubernetes.container.hash: c48905cf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b,PodSandboxId:c21d689adefa6932162ff3b64e16541f0cb1322428a8349cd752df76261238e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704759333453334502,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vnf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1a87e8a6-55b5-4579-aa4e-1a20be126ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 623b19de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b,PodSandboxId:d9ded595452b7d188f1ee2898d34acaa507c918ffd2a008aa213cc02bb8c78f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704759311401457197,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9cfef17d11830a8ed29b7b05a894b9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a72add8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd,PodSandboxId:7ca36c7ddc5b758d6c724cbf6032fcd892e93c38fd282ed3080b0cb8d8628772,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704759311305257087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb34a70bb18e99dfe7af59f87c242f79,},Annotations:map
[string]string{io.kubernetes.container.hash: 762d1c1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a,PodSandboxId:72a56456d833bbc028c79b3302640f9b4a8e9d8504ac95ed3bf83d56d468e953,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704759311275577050,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a10b7db81804221180b16bf73df17840,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24,PodSandboxId:bc6ea128fce71ae85009978a5f245db41b0cea71eb90597cec53416ac5c7cd45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704759311028914712,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-378213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52dd6fed1fd30892e205b9a6becc8177,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e17795a3-3045-4b58-97c3-f2079648b439 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9ddb767a3680b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   28e8d7228f95c       storage-provisioner
	16e8e419faf28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   4021cd157f894       coredns-76f75df574-ztvgr
	577d39068d7c0       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   c21d689adefa6       kube-proxy-4vnf5
	31914c8452b6b       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   d9ded595452b7       kube-apiserver-no-preload-378213
	3f150bb39755e       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   7ca36c7ddc5b7       etcd-no-preload-378213
	6657ae7032ad4       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   72a56456d833b       kube-scheduler-no-preload-378213
	315a6bb636ced       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   bc6ea128fce71       kube-controller-manager-no-preload-378213
	
	
	==> coredns [16e8e419faf289d6d2fd855489167b3897ab05aec11c4f97cb8b781fc213fdf8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47191 - 39189 "HINFO IN 5390086558276774289.492511922595418031. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.04397005s
	
	
	==> describe nodes <==
	Name:               no-preload-378213
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-378213
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=no-preload-378213
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_15_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:15:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-378213
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:29:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:25:51 +0000   Tue, 09 Jan 2024 00:15:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.62
	  Hostname:    no-preload-378213
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c599400ad65a4458b3dd9b13cea40b29
	  System UUID:                c599400a-d65a-4458-b3dd-9b13cea40b29
	  Boot ID:                    8a539832-187e-4228-8d3f-4c857670d960
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ztvgr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-378213                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-378213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-378213    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-4vnf5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-378213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-k426v              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-378213 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-378213 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-378213 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-378213 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-378213 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-378213 event: Registered Node no-preload-378213 in Controller
	
	
	==> dmesg <==
	[Jan 9 00:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070929] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.599910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.462871] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150587] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.440390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.241046] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.122966] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.154374] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.121895] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.270134] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +30.110927] systemd-fstab-generator[1320]: Ignoring "noauto" for root device
	[Jan 9 00:10] kauditd_printk_skb: 5 callbacks suppressed
	[ +27.383710] kauditd_printk_skb: 14 callbacks suppressed
	[Jan 9 00:15] systemd-fstab-generator[3974]: Ignoring "noauto" for root device
	[  +9.804596] systemd-fstab-generator[4306]: Ignoring "noauto" for root device
	[ +14.220585] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [3f150bb39755e0577028ce7eb428665ca415e1f45ce9453a0e757b883d3783cd] <==
	{"level":"info","ts":"2024-01-09T00:15:14.283337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc received MsgPreVoteResp from f42a9b63be5d0edc at term 1"}
	{"level":"info","ts":"2024-01-09T00:15:14.283519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc received MsgVoteResp from f42a9b63be5d0edc at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f42a9b63be5d0edc became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.283611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f42a9b63be5d0edc elected leader f42a9b63be5d0edc at term 2"}
	{"level":"info","ts":"2024-01-09T00:15:14.285438Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f42a9b63be5d0edc","local-member-attributes":"{Name:no-preload-378213 ClientURLs:[https://192.168.61.62:2379]}","request-path":"/0/members/f42a9b63be5d0edc/attributes","cluster-id":"2f37c55d4fac412f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:15:14.285521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:15:14.285915Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:15:14.286098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:15:14.285548Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.285572Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:15:14.287763Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f37c55d4fac412f","local-member-id":"f42a9b63be5d0edc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.287853Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.287898Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:15:14.289507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.62:2379"}
	{"level":"info","ts":"2024-01-09T00:15:14.290182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:25:14.334229Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-01-09T00:25:14.337654Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":683,"took":"2.964518ms","hash":373110865}
	{"level":"info","ts":"2024-01-09T00:25:14.337777Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":373110865,"revision":683,"compact-revision":-1}
	{"level":"info","ts":"2024-01-09T00:29:50.767004Z","caller":"traceutil/trace.go:171","msg":"trace[1861528242] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"433.279625ms","start":"2024-01-09T00:29:50.33363Z","end":"2024-01-09T00:29:50.76691Z","steps":["trace[1861528242] 'process raft request'  (duration: 433.083597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:29:50.768131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:29:50.333616Z","time spent":"433.607474ms","remote":"127.0.0.1:35984","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1150 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-01-09T00:29:50.982768Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.820878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-09T00:29:50.983114Z","caller":"traceutil/trace.go:171","msg":"trace[2121737648] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1151; }","duration":"174.170657ms","start":"2024-01-09T00:29:50.808921Z","end":"2024-01-09T00:29:50.983092Z","steps":["trace[2121737648] 'range keys from in-memory index tree'  (duration: 173.688546ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:29:55 up 20 min,  0 users,  load average: 0.18, 0.22, 0.26
	Linux no-preload-378213 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [31914c8452b6b644a4418a427e830203aa482b9eef3e050c0841aed41127234b] <==
	I0109 00:23:16.823408       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:25:15.824820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:15.824940       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0109 00:25:16.825442       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:16.825585       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:25:16.825623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:25:16.825922       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:25:16.826237       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:25:16.827490       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:26:16.826540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:26:16.826636       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:26:16.826649       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:26:16.827743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:26:16.827885       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:26:16.828177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:28:16.827415       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:16.827734       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0109 00:28:16.827769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0109 00:28:16.828717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0109 00:28:16.828852       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0109 00:28:16.828886       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [315a6bb636ced0341b3ebf7dab187b7558ec4a4b9ca22ca30130be5614466b24] <==
	I0109 00:24:01.591847       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:24:31.087073       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:24:31.600655       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:25:01.094182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:01.611580       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:25:31.100599       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:25:31.621748       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:01.106816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:01.631536       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:26:31.113764       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:26:31.641662       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0109 00:26:35.557495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="308.125µs"
	I0109 00:26:47.559513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="166.541µs"
	E0109 00:27:01.119660       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:01.651865       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:27:31.125653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:27:31.671651       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:01.131594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:01.679826       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:28:31.141353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:28:31.688811       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:01.148114       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:01.697837       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0109 00:29:31.154403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0109 00:29:31.708498       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [577d39068d7c0cb25ac39c2e8160fff6fdeb5a533c15361b8435cf42d656891b] <==
	I0109 00:15:34.651765       1 server_others.go:72] "Using iptables proxy"
	I0109 00:15:34.797170       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.62"]
	I0109 00:15:34.905437       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0109 00:15:34.907791       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0109 00:15:34.907920       1 server_others.go:168] "Using iptables Proxier"
	I0109 00:15:34.912224       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:15:34.912522       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0109 00:15:34.912566       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:15:34.914322       1 config.go:188] "Starting service config controller"
	I0109 00:15:34.914338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:15:34.914351       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:15:34.914354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:15:34.914678       1 config.go:315] "Starting node config controller"
	I0109 00:15:34.914684       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:15:35.015884       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:15:35.016935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:15:35.019546       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6657ae7032ad4c275cee17c495a08f1e23207fc30f006f2e12a2d2a7de88bf8a] <==
	W0109 00:15:16.812633       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:15:16.812920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0109 00:15:16.857613       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:15:16.857737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:15:16.924300       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.924370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:16.935062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:15:16.935137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:15:16.972696       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.973072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:16.992525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:15:16.992601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0109 00:15:17.021089       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:15:17.021208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:15:17.032400       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:15:17.032528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:15:17.143862       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:15:17.143927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:15:17.208264       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:15:17.208409       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:15:17.260434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:15:17.260529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:15:17.269601       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:15:17.269699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0109 00:15:20.078014       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-09 00:09:20 UTC, ends at Tue 2024-01-09 00:29:55 UTC. --
	Jan 09 00:27:00 no-preload-378213 kubelet[4312]: E0109 00:27:00.538391    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:13 no-preload-378213 kubelet[4312]: E0109 00:27:13.538647    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]: E0109 00:27:19.651662    4312 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:27:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:27:24 no-preload-378213 kubelet[4312]: E0109 00:27:24.537565    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:36 no-preload-378213 kubelet[4312]: E0109 00:27:36.538735    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:27:51 no-preload-378213 kubelet[4312]: E0109 00:27:51.538662    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:06 no-preload-378213 kubelet[4312]: E0109 00:28:06.537683    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:17 no-preload-378213 kubelet[4312]: E0109 00:28:17.539496    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]: E0109 00:28:19.649289    4312 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:28:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:28:32 no-preload-378213 kubelet[4312]: E0109 00:28:32.538321    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:28:47 no-preload-378213 kubelet[4312]: E0109 00:28:47.538891    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:29:02 no-preload-378213 kubelet[4312]: E0109 00:29:02.538274    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:29:17 no-preload-378213 kubelet[4312]: E0109 00:29:17.538903    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:29:19 no-preload-378213 kubelet[4312]: E0109 00:29:19.648320    4312 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 09 00:29:19 no-preload-378213 kubelet[4312]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 09 00:29:19 no-preload-378213 kubelet[4312]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 09 00:29:19 no-preload-378213 kubelet[4312]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 09 00:29:30 no-preload-378213 kubelet[4312]: E0109 00:29:30.538677    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	Jan 09 00:29:43 no-preload-378213 kubelet[4312]: E0109 00:29:43.539634    4312 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k426v" podUID="ccc02dbd-f70f-46d3-b39d-0fef97bfa04e"
	
	
	==> storage-provisioner [9ddb767a3680b832c20802773d3175b13ab504bf44fb69f41066fb774bbafe62] <==
	I0109 00:15:35.046880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:15:35.075030       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:15:35.075229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:15:35.094183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:15:35.095343       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6!
	I0109 00:15:35.097350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b6d1faa-1894-47b9-8272-4983df82590d", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6 became leader
	I0109 00:15:35.196401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-378213_76389a63-633e-4ad4-abf0-2f04a23cd7d6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-378213 -n no-preload-378213
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-378213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-k426v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v: exit status 1 (77.178905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-k426v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-378213 describe pod metrics-server-57f55c9bc5-k426v: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (42.63s)
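For reference, the failing check above is a wait for an addon pod to reappear after the cluster restart (the etcd trace shows the apiserver repeatedly ranging over /registry/pods/kubernetes-dashboard/ while the post-mortem only finds the pending metrics-server pod). The following is a minimal, hand-rolled sketch of that kind of wait, not the minikube test code itself: the context name is taken from this report, while the namespace and label selector are illustrative assumptions, and kubectl is assumed to be on PATH with the cluster still reachable.

// pollpods.go - sketch of an "addon pod exists" wait: keep asking kubectl
// whether any pod matching a label selector exists in a namespace until a
// deadline passes. Not the minikube test source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsMatching shells out to kubectl and returns the names of pods in the
// given namespace that match the label selector.
func podsMatching(kubeContext, namespace, selector string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pods", "-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	const (
		kubeContext = "no-preload-378213"            // from the report above
		namespace   = "kubernetes-dashboard"         // assumption
		selector    = "k8s-app=kubernetes-dashboard" // assumption
	)
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		if names, err := podsMatching(kubeContext, namespace, selector); err == nil && len(names) > 0 {
			fmt.Println("pods found:", names)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for addon pods")
}

Run against a live profile (go run pollpods.go), this reproduces the shape of the wait: the test fails when the polled selector never returns a running pod before the timeout, which matches the NotFound result in the post-mortem above.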

                                                
                                    

Test pass (241/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.6
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 6.11
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 11.9
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.17
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
26 TestBinaryMirror 0.67
27 TestOffline 96.36
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
32 TestAddons/Setup 216.28
36 TestAddons/parallel/InspektorGadget 12.72
37 TestAddons/parallel/MetricsServer 7.26
38 TestAddons/parallel/HelmTiller 15
40 TestAddons/parallel/CSI 75.1
41 TestAddons/parallel/Headlamp 18.32
42 TestAddons/parallel/CloudSpanner 6.83
43 TestAddons/parallel/LocalPath 49.87
44 TestAddons/parallel/NvidiaDevicePlugin 6.72
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.13
50 TestCertOptions 101.44
51 TestCertExpiration 349.16
53 TestForceSystemdFlag 69.1
54 TestForceSystemdEnv 69.66
56 TestKVMDriverInstallOrUpdate 3.52
60 TestErrorSpam/setup 49.16
61 TestErrorSpam/start 0.46
62 TestErrorSpam/status 0.9
63 TestErrorSpam/pause 1.81
64 TestErrorSpam/unpause 2.02
65 TestErrorSpam/stop 2.32
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 104.57
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 39.93
72 TestFunctional/serial/KubeContext 0.05
73 TestFunctional/serial/KubectlGetPods 0.09
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.72
77 TestFunctional/serial/CacheCmd/cache/add_local 1.66
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.14
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
85 TestFunctional/serial/ExtraConfig 35.29
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.82
88 TestFunctional/serial/LogsFileCmd 1.81
89 TestFunctional/serial/InvalidService 4.53
91 TestFunctional/parallel/ConfigCmd 0.49
92 TestFunctional/parallel/DashboardCmd 16.05
93 TestFunctional/parallel/DryRun 0.36
94 TestFunctional/parallel/InternationalLanguage 0.22
95 TestFunctional/parallel/StatusCmd 1.17
99 TestFunctional/parallel/ServiceCmdConnect 26.77
100 TestFunctional/parallel/AddonsCmd 0.2
101 TestFunctional/parallel/PersistentVolumeClaim 55.6
103 TestFunctional/parallel/SSHCmd 0.5
104 TestFunctional/parallel/CpCmd 1.67
105 TestFunctional/parallel/MySQL 28.21
106 TestFunctional/parallel/FileSync 0.29
107 TestFunctional/parallel/CertSync 1.72
111 TestFunctional/parallel/NodeLabels 0.1
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
115 TestFunctional/parallel/License 0.25
116 TestFunctional/parallel/Version/short 0.11
117 TestFunctional/parallel/Version/components 1.11
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
125 TestFunctional/parallel/MountCmd/any-port 25.03
131 TestFunctional/parallel/MountCmd/specific-port 2.42
132 TestFunctional/parallel/ServiceCmd/DeployApp 10.41
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.13
135 TestFunctional/parallel/ProfileCmd/profile_list 0.57
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.14
142 TestFunctional/parallel/ImageCommands/Setup 1.1
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.13
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
145 TestFunctional/parallel/ServiceCmd/List 1.34
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.4
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.51
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
149 TestFunctional/parallel/ServiceCmd/Format 0.47
150 TestFunctional/parallel/ServiceCmd/URL 0.76
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.07
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.28
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 109.23
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.52
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
168 TestJSONOutput/start/Command 101.91
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.67
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.62
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.23
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 99.46
200 TestMountStart/serial/StartWithMountFirst 28.42
201 TestMountStart/serial/VerifyMountFirst 0.42
202 TestMountStart/serial/StartWithMountSecond 29.99
203 TestMountStart/serial/VerifyMountSecond 0.41
204 TestMountStart/serial/DeleteFirst 0.67
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 2.1
207 TestMountStart/serial/RestartStopped 21.62
208 TestMountStart/serial/VerifyMountPostStop 0.42
211 TestMultiNode/serial/FreshStart2Nodes 113.77
212 TestMultiNode/serial/DeployApp2Nodes 5.3
214 TestMultiNode/serial/AddNode 40.63
215 TestMultiNode/serial/MultiNodeLabels 0.06
216 TestMultiNode/serial/ProfileList 0.23
217 TestMultiNode/serial/CopyFile 7.89
218 TestMultiNode/serial/StopNode 3
219 TestMultiNode/serial/StartAfterStop 30.12
221 TestMultiNode/serial/DeleteNode 1.85
223 TestMultiNode/serial/RestartMultiNode 441.56
224 TestMultiNode/serial/ValidateNameConflict 47.37
231 TestScheduledStopUnix 120.18
237 TestKubernetesUpgrade 161.47
239 TestStoppedBinaryUpgrade/Setup 0.34
248 TestNetworkPlugins/group/false 3.71
260 TestPause/serial/Start 108.41
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 50.37
264 TestNoKubernetes/serial/StartWithStopK8s 7.34
265 TestPause/serial/SecondStartNoReconfiguration 38.28
266 TestNoKubernetes/serial/Start 31.48
267 TestPause/serial/Pause 0.95
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 0.8
270 TestPause/serial/VerifyStatus 0.28
271 TestPause/serial/Unpause 0.8
272 TestNoKubernetes/serial/Stop 1.31
273 TestPause/serial/PauseAgain 1.01
274 TestNoKubernetes/serial/StartNoArgs 82.25
275 TestPause/serial/DeletePaused 0.83
276 TestPause/serial/VerifyDeletedResources 0.13
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.4
278 TestNetworkPlugins/group/auto/Start 173.86
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
280 TestNetworkPlugins/group/kindnet/Start 145.02
281 TestNetworkPlugins/group/calico/Start 120.89
282 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
284 TestNetworkPlugins/group/kindnet/NetCatPod 12.25
285 TestNetworkPlugins/group/auto/KubeletFlags 0.27
286 TestNetworkPlugins/group/auto/NetCatPod 11.31
287 TestNetworkPlugins/group/kindnet/DNS 0.24
288 TestNetworkPlugins/group/kindnet/Localhost 0.17
289 TestNetworkPlugins/group/kindnet/HairPin 0.17
290 TestNetworkPlugins/group/auto/DNS 0.2
291 TestNetworkPlugins/group/auto/Localhost 0.17
292 TestNetworkPlugins/group/auto/HairPin 0.17
293 TestNetworkPlugins/group/custom-flannel/Start 89.87
294 TestNetworkPlugins/group/enable-default-cni/Start 133.37
295 TestNetworkPlugins/group/calico/ControllerPod 6.01
296 TestNetworkPlugins/group/calico/KubeletFlags 0.23
297 TestNetworkPlugins/group/calico/NetCatPod 11.25
298 TestNetworkPlugins/group/calico/DNS 0.35
299 TestNetworkPlugins/group/calico/Localhost 0.22
300 TestNetworkPlugins/group/calico/HairPin 0.18
301 TestNetworkPlugins/group/flannel/Start 103.18
302 TestNetworkPlugins/group/bridge/Start 116.98
303 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
304 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.25
305 TestNetworkPlugins/group/custom-flannel/DNS 0.24
306 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
307 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
309 TestStartStop/group/old-k8s-version/serial/FirstStart 139.22
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.48
312 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
313 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
314 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
315 TestNetworkPlugins/group/flannel/ControllerPod 5.02
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.61
317 TestNetworkPlugins/group/flannel/NetCatPod 13.5
319 TestStartStop/group/no-preload/serial/FirstStart 129.72
320 TestNetworkPlugins/group/flannel/DNS 0.21
321 TestNetworkPlugins/group/flannel/Localhost 0.19
322 TestNetworkPlugins/group/flannel/HairPin 0.17
324 TestStartStop/group/embed-certs/serial/FirstStart 71.96
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
326 TestNetworkPlugins/group/bridge/NetCatPod 13.3
327 TestNetworkPlugins/group/bridge/DNS 0.2
328 TestNetworkPlugins/group/bridge/Localhost 0.18
329 TestNetworkPlugins/group/bridge/HairPin 0.17
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 104.13
332 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
333 TestStartStop/group/embed-certs/serial/DeployApp 9.34
334 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
338 TestStartStop/group/no-preload/serial/DeployApp 8.32
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
346 TestStartStop/group/old-k8s-version/serial/SecondStart 787.64
347 TestStartStop/group/embed-certs/serial/SecondStart 895.18
349 TestStartStop/group/no-preload/serial/SecondStart 926.22
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 556.32
360 TestStartStop/group/newest-cni/serial/FirstStart 65.54
362 TestStartStop/group/newest-cni/serial/DeployApp 0
363 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.66
364 TestStartStop/group/newest-cni/serial/Stop 11.28
365 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
366 TestStartStop/group/newest-cni/serial/SecondStart 47.44
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
370 TestStartStop/group/newest-cni/serial/Pause 2.48
x
+
TestDownloadOnly/v1.16.0/json-events (12.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.596565824s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.60s)
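To re-run this download-only step outside the test harness, the same invocation can be issued directly; this is a sketch that assumes a minikube binary built at out/minikube-linux-amd64 (as on this agent) and a working kvm2 driver, and it reuses the profile name from the log purely as an example:

  # Same flags as the TestDownloadOnly/v1.16.0/json-events invocation above.
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 \
    --force --alsologtostderr \
    --kubernetes-version=v1.16.0 \
    --driver=kvm2 --container-runtime=crio

  # Remove the profile afterwards (what TestDownloadOnly/DeleteAll does at the end).
  out/minikube-linux-amd64 delete --all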

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138294
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138294: exit status 85 (88.557038ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-138294        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:51:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:51:41.250603  407105 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:51:41.250790  407105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:41.250800  407105 out.go:309] Setting ErrFile to fd 2...
	I0108 22:51:41.250805  407105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:41.251019  407105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	W0108 22:51:41.251173  407105 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: no such file or directory
	I0108 22:51:41.251920  407105 out.go:303] Setting JSON to true
	I0108 22:51:41.253180  407105 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12827,"bootTime":1704741474,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:51:41.253269  407105 start.go:138] virtualization: kvm guest
	I0108 22:51:41.256844  407105 out.go:97] [download-only-138294] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:51:41.259064  407105 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:51:41.257104  407105 notify.go:220] Checking for updates...
	W0108 22:51:41.257110  407105 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 22:51:41.262962  407105 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:51:41.264855  407105 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:51:41.266596  407105 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:51:41.268077  407105 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:51:41.271109  407105 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:51:41.271546  407105 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:51:41.313768  407105 out.go:97] Using the kvm2 driver based on user configuration
	I0108 22:51:41.313800  407105 start.go:298] selected driver: kvm2
	I0108 22:51:41.313806  407105 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:51:41.314258  407105 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:51:41.314349  407105 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:51:41.332973  407105 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:51:41.333118  407105 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:51:41.333724  407105 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0108 22:51:41.333915  407105 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 22:51:41.333996  407105 cni.go:84] Creating CNI manager for ""
	I0108 22:51:41.334008  407105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:51:41.334019  407105 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:51:41.334029  407105 start_flags.go:323] config:
	{Name:download-only-138294 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-138294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:41.334404  407105 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:51:41.337001  407105 out.go:97] Downloading VM boot image ...
	I0108 22:51:41.337068  407105 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:51:43.713649  407105 out.go:97] Starting control plane node download-only-138294 in cluster download-only-138294
	I0108 22:51:43.713703  407105 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:51:43.743940  407105 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:43.743986  407105 cache.go:56] Caching tarball of preloaded images
	I0108 22:51:43.744180  407105 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:51:43.746512  407105 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 22:51:43.746557  407105 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:43.779255  407105 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:48.205638  407105 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:48.205734  407105 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:49.126990  407105 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0108 22:51:49.127491  407105 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/download-only-138294/config.json ...
	I0108 22:51:49.127551  407105 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/download-only-138294/config.json: {Name:mk0aef5b23ec5c0ccf7c0637cb544bbc41c173e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:51:49.127750  407105 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:51:49.127942  407105 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-138294"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
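The exit status 85 from "minikube logs" is expected here: with --download-only no control plane node is ever created, which is exactly what the captured output reports. The preload artifacts themselves can be checked by hand; a sketch using the cache path and md5 checksum taken from the download log above (MINIKUBE_HOME differs on other machines, where ~/.minikube is the default):

  # Path and checksum copied from the log above; adjust MINIKUBE_HOME for other setups.
  MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
  TARBALL=$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4

  ls -lh "$TARBALL"   # present after the --download-only run (this is what preload-exists checks)
  echo "432b600409d778ea7a21214e83948570  $TARBALL" | md5sum -c -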

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (6.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.10747912s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138294
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138294: exit status 85 (85.80308ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-138294        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-138294        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:51:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:51:53.936643  407162 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:51:53.936984  407162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:53.936994  407162 out.go:309] Setting ErrFile to fd 2...
	I0108 22:51:53.936999  407162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:51:53.937262  407162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	W0108 22:51:53.937406  407162 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: no such file or directory
	I0108 22:51:53.937987  407162 out.go:303] Setting JSON to true
	I0108 22:51:53.939174  407162 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12840,"bootTime":1704741474,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:51:53.939257  407162 start.go:138] virtualization: kvm guest
	I0108 22:51:53.942130  407162 out.go:97] [download-only-138294] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:51:53.943957  407162 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:51:53.942341  407162 notify.go:220] Checking for updates...
	I0108 22:51:53.947331  407162 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:51:53.949016  407162 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:51:53.950384  407162 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:51:53.951688  407162 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:51:53.954383  407162 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:51:53.954958  407162 config.go:182] Loaded profile config "download-only-138294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 22:51:53.955026  407162 start.go:810] api.Load failed for download-only-138294: filestore "download-only-138294": Docker machine "download-only-138294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:51:53.955114  407162 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:51:53.955177  407162 start.go:810] api.Load failed for download-only-138294: filestore "download-only-138294": Docker machine "download-only-138294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:51:53.992660  407162 out.go:97] Using the kvm2 driver based on existing profile
	I0108 22:51:53.992706  407162 start.go:298] selected driver: kvm2
	I0108 22:51:53.992716  407162 start.go:902] validating driver "kvm2" against &{Name:download-only-138294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-138294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:53.993215  407162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:51:53.993306  407162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:51:54.010368  407162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:51:54.011373  407162 cni.go:84] Creating CNI manager for ""
	I0108 22:51:54.011398  407162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:51:54.011421  407162 start_flags.go:323] config:
	{Name:download-only-138294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-138294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:51:54.011628  407162 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:51:54.013800  407162 out.go:97] Starting control plane node download-only-138294 in cluster download-only-138294
	I0108 22:51:54.013825  407162 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:51:54.043682  407162 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:54.043717  407162 cache.go:56] Caching tarball of preloaded images
	I0108 22:51:54.043877  407162 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:51:54.045974  407162 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 22:51:54.046000  407162 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:54.078621  407162 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 22:51:58.170047  407162 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:51:58.170169  407162 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-138294"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (11.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138294 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.9033051s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138294
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138294: exit status 85 (87.97849ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-138294           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:51 UTC |          |
	|         | -p download-only-138294           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-138294 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |          |
	|         | -p download-only-138294           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:52:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:52:00.134010  407218 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:52:00.134343  407218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:00.134355  407218 out.go:309] Setting ErrFile to fd 2...
	I0108 22:52:00.134360  407218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:00.134578  407218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	W0108 22:52:00.134704  407218 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-399915/.minikube/config/config.json: no such file or directory
	I0108 22:52:00.135153  407218 out.go:303] Setting JSON to true
	I0108 22:52:00.136263  407218 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12846,"bootTime":1704741474,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:52:00.136343  407218 start.go:138] virtualization: kvm guest
	I0108 22:52:00.139037  407218 out.go:97] [download-only-138294] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:52:00.140703  407218 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:52:00.139330  407218 notify.go:220] Checking for updates...
	I0108 22:52:00.143742  407218 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:52:00.145310  407218 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 22:52:00.147079  407218 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 22:52:00.148459  407218 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:52:00.151617  407218 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:52:00.152135  407218 config.go:182] Loaded profile config "download-only-138294": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:52:00.152186  407218 start.go:810] api.Load failed for download-only-138294: filestore "download-only-138294": Docker machine "download-only-138294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:52:00.152304  407218 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:52:00.152347  407218 start.go:810] api.Load failed for download-only-138294: filestore "download-only-138294": Docker machine "download-only-138294" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:52:00.189147  407218 out.go:97] Using the kvm2 driver based on existing profile
	I0108 22:52:00.189178  407218 start.go:298] selected driver: kvm2
	I0108 22:52:00.189185  407218 start.go:902] validating driver "kvm2" against &{Name:download-only-138294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-138294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:00.189635  407218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:00.189744  407218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-399915/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:52:00.206968  407218 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:52:00.207843  407218 cni.go:84] Creating CNI manager for ""
	I0108 22:52:00.207865  407218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 22:52:00.207878  407218 start_flags.go:323] config:
	{Name:download-only-138294 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-138294 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:00.208083  407218 iso.go:125] acquiring lock: {Name:mka4afd2d697bf9a8936aa30f9e7728f5db3cb89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:00.210174  407218 out.go:97] Starting control plane node download-only-138294 in cluster download-only-138294
	I0108 22:52:00.210190  407218 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:52:00.264479  407218 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:52:00.264520  407218 cache.go:56] Caching tarball of preloaded images
	I0108 22:52:00.264696  407218 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:52:00.266427  407218 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 22:52:00.266442  407218 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:52:00.306300  407218 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 22:52:04.273245  407218 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:52:04.273357  407218 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-399915/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 22:52:05.104435  407218 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 22:52:05.104610  407218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/download-only-138294/config.json ...
	I0108 22:52:05.104869  407218 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:52:05.105056  407218 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-399915/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-138294"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-138294
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
x
+
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-576323 --alsologtostderr --binary-mirror http://127.0.0.1:42563 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-576323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-576323
--- PASS: TestBinaryMirror (0.67s)
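TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:42563 in this run) so the kubeadm/kubelet/kubectl binaries are fetched from it instead of dl.k8s.io. A rough manual equivalent, under the assumption that the mirror exposes the same <version>/bin/linux/amd64/... layout seen in the kubectl download URLs earlier in this report, might look like:

  # Hypothetical local mirror; the directory layout is an assumption, not taken from the test code.
  mkdir -p mirror/v1.28.4/bin/linux/amd64
  cp /path/to/kubectl mirror/v1.28.4/bin/linux/amd64/   # pre-seed whatever binaries should be served
  python3 -m http.server 42563 --directory mirror &

  out/minikube-linux-amd64 start --download-only -p binary-mirror-576323 \
    --alsologtostderr --binary-mirror http://127.0.0.1:42563 \
    --driver=kvm2 --container-runtime=crio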

                                                
                                    
x
+
TestOffline (96.36s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-619987 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-619987 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.343944834s)
helpers_test.go:175: Cleaning up "offline-crio-619987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-619987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-619987: (1.016414361s)
--- PASS: TestOffline (96.36s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910124
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910124: exit status 85 (83.028713ms)

                                                
                                                
-- stdout --
	* Profile "addons-910124" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910124"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910124
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910124: exit status 85 (92.360702ms)

                                                
                                                
-- stdout --
	* Profile "addons-910124" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910124"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (216.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-910124 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-910124 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.283963639s)
--- PASS: TestAddons/Setup (216.28s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.72s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dg6l5" [ab57f458-54bf-4b04-abcd-172bd203b03e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005027593s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-910124
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-910124: (6.712848915s)
--- PASS: TestAddons/parallel/InspektorGadget (12.72s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 8.841403ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fspmw" [e7812f80-df3d-4fc2-8430-9c7246f638f0] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008336719s
addons_test.go:415: (dbg) Run:  kubectl --context addons-910124 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable metrics-server --alsologtostderr -v=1: (1.165151059s)
--- PASS: TestAddons/parallel/MetricsServer (7.26s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (15s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.751551ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-w9l5g" [d00ef7bc-d0f2-4fce-9757-1a825ca34ef8] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007030407s
addons_test.go:473: (dbg) Run:  kubectl --context addons-910124 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-910124 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.735065233s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable helm-tiller --alsologtostderr -v=1: (1.250696991s)
--- PASS: TestAddons/parallel/HelmTiller (15.00s)

                                                
                                    
x
+
TestAddons/parallel/CSI (75.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 37.790647ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-910124 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-910124 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cb954aae-f170-4c74-a1a6-5770cc9fe910] Pending
helpers_test.go:344: "task-pv-pod" [cb954aae-f170-4c74-a1a6-5770cc9fe910] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cb954aae-f170-4c74-a1a6-5770cc9fe910] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.014522607s
addons_test.go:584: (dbg) Run:  kubectl --context addons-910124 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910124 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910124 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-910124 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-910124 delete pod task-pv-pod: (1.710576162s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-910124 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-910124 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-910124 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [023c6636-4fcb-4124-a437-a47252ac6cd5] Pending
helpers_test.go:344: "task-pv-pod-restore" [023c6636-4fcb-4124-a437-a47252ac6cd5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [023c6636-4fcb-4124-a437-a47252ac6cd5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004338664s
addons_test.go:626: (dbg) Run:  kubectl --context addons-910124 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-910124 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-910124 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.117931093s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (75.10s)
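For readers following the snapshot/restore flow above, the same jsonpath probes the helpers use can be run by hand while the test's objects still exist; context and object names are copied from this log, and the final dataSource query is an assumption about how the restored claim references the snapshot rather than something the test itself checks.
kubectl --context addons-910124 get pvc hpvc -o jsonpath='{.status.phase}'
kubectl --context addons-910124 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
# assumption: the restored claim points back at the snapshot through spec.dataSource
kubectl --context addons-910124 get pvc hpvc-restore -o jsonpath='{.spec.dataSource.name}'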

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-910124 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-910124 --alsologtostderr -v=1: (2.303447482s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-sfj86" [c595560c-8e3d-4723-8840-ad6fe139c985] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-sfj86" [c595560c-8e3d-4723-8840-ad6fe139c985] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-sfj86" [c595560c-8e3d-4723-8840-ad6fe139c985] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.014629199s
--- PASS: TestAddons/parallel/Headlamp (18.32s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-4b2k6" [c105ca62-3293-4681-aa1a-1a25a0f68530] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005400038s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-910124
--- PASS: TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (49.87s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-910124 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-910124 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3d967b49-3f59-4f13-b7cb-68da4e5e4027] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3d967b49-3f59-4f13-b7cb-68da4e5e4027] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3d967b49-3f59-4f13-b7cb-68da4e5e4027] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.007457324s
addons_test.go:891: (dbg) Run:  kubectl --context addons-910124 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 ssh "cat /opt/local-path-provisioner/pvc-5a47dfec-d168-4824-b7d6-ab2a0c18ba84_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-910124 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-910124 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-910124 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-910124 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (33.787670961s)
--- PASS: TestAddons/parallel/LocalPath (49.87s)
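A note on the path read back above: judging from the directory name in this log, local-path creates one host directory per claim under /opt/local-path-provisioner, encoding the PV name (pvc-<uid>), the namespace, and the claim name. A minimal hand check against this run (the uid-bearing directory name is specific to this run):
minikube -p addons-910124 ssh "ls /opt/local-path-provisioner/"
minikube -p addons-910124 ssh "cat /opt/local-path-provisioner/pvc-5a47dfec-d168-4824-b7d6-ab2a0c18ba84_default_test-pvc/file1"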

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.72s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n8pqg" [22231673-96e3-48d4-a97e-9d77a615c63c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009402966s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-910124
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.72s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-d5pgh" [8df1e3cb-5981-4ca0-8178-2b3f4ef883db] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006268564s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-910124 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-910124 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestCertOptions (101.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-087471 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-087471 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m39.894368476s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-087471 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-087471 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-087471 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-087471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-087471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-087471: (1.034947032s)
--- PASS: TestCertOptions (101.44s)
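As an aid for reading the assertions above: the extra --apiserver-ips/--apiserver-names values should appear as SANs in the generated apiserver certificate, and the non-default --apiserver-port in the kubeconfig server URL. A minimal sketch reusing the paths and names from this log (the grep and jsonpath are illustrative, not what the test runs):
minikube -p cert-options-087471 ssh "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
# expect 192.168.15.15 and www.google.com among the SANs
kubectl --context cert-options-087471 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# expect the server URL to end in :8555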

                                                
                                    
x
+
TestCertExpiration (349.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-029474 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-029474 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (2m8.063357485s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-029474 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-029474 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.024770169s)
helpers_test.go:175: Cleaning up "cert-expiration-029474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-029474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-029474: (1.067181962s)
--- PASS: TestCertExpiration (349.16s)
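To make the two runs above concrete: the certificate lifetime that --cert-expiration controls can be read directly off the node, and the first start's 3m expiry is what forces regeneration on the second start. A minimal sketch, assuming the standard minikube certificate path shown in the TestCertOptions log above:
minikube -p cert-expiration-029474 ssh "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
# with --cert-expiration=3m the printed notAfter is minutes away; after the restart with
# --cert-expiration=8760h it should sit roughly one year out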

                                                
                                    
x
+
TestForceSystemdFlag (69.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-922760 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-922760 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.876667411s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-922760 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-922760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-922760
--- PASS: TestForceSystemdFlag (69.10s)
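The ssh command above dumps the whole CRI-O drop-in; the specific setting --force-systemd is expected to flip is the cgroup manager. A minimal hand check (the expected value is an assumption about what the test asserts):
minikube -p force-systemd-flag-922760 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected (assumption): cgroup_manager = "systemd"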

                                                
                                    
x
+
TestForceSystemdEnv (69.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-013951 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-013951 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.537961014s)
helpers_test.go:175: Cleaning up "force-systemd-env-013951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-013951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-013951: (1.116769682s)
--- PASS: TestForceSystemdEnv (69.66s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                    
x
+
TestErrorSpam/setup (49.16s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-946165 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-946165 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-946165 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-946165 --driver=kvm2  --container-runtime=crio: (49.15590801s)
--- PASS: TestErrorSpam/setup (49.16s)

                                                
                                    
x
+
TestErrorSpam/start (0.46s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 start --dry-run
--- PASS: TestErrorSpam/start (0.46s)

                                                
                                    
x
+
TestErrorSpam/status (0.9s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 status
--- PASS: TestErrorSpam/status (0.90s)

                                                
                                    
x
+
TestErrorSpam/pause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 pause
--- PASS: TestErrorSpam/pause (1.81s)

                                                
                                    
x
+
TestErrorSpam/unpause (2.02s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

                                                
                                    
x
+
TestErrorSpam/stop (2.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 stop: (2.122641913s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-946165 --log_dir /tmp/nospam-946165 stop
--- PASS: TestErrorSpam/stop (2.32s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17830-399915/.minikube/files/etc/test/nested/copy/407094/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (104.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-483810 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m44.566779043s)
--- PASS: TestFunctional/serial/StartWithProxy (104.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.93s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-483810 --alsologtostderr -v=8: (39.933645197s)
functional_test.go:659: soft start took 39.93448302s for "functional-483810" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.93s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-483810 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:3.1: (1.245347089s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:3.3: (1.23421397s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 cache add registry.k8s.io/pause:latest: (1.235429739s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-483810 /tmp/TestFunctionalserialCacheCmdcacheadd_local1948619311/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache add minikube-local-cache-test:functional-483810
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 cache add minikube-local-cache-test:functional-483810: (1.250078411s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache delete minikube-local-cache-test:functional-483810
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-483810
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.938432ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 cache reload: (1.12452886s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
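Restated as standalone commands for anyone reproducing the cycle above outside the test harness (profile and image names copied from this log):
minikube -p functional-483810 ssh sudo crictl rmi registry.k8s.io/pause:latest        # image removed from the node
minikube -p functional-483810 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
minikube -p functional-483810 cache reload                                            # re-pushes every cached image
minikube -p functional-483810 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again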

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 kubectl -- --context functional-483810 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-483810 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.29s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 23:05:49.610942  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.617129  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.627554  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.648016  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.688490  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.768980  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:49.929481  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:50.250122  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:50.890593  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:52.171325  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:54.732110  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:05:59.853135  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-483810 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.288566107s)
functional_test.go:757: restart took 35.288697728s for "functional-483810" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.29s)
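One way to see that the --extra-config value actually reached the control plane is to look at the kube-apiserver static pod's flags; the label selector below is an assumption based on a standard kubeadm layout, not part of the test:
kubectl --context functional-483810 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins
# expected to include NamespaceAutoProvision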

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-483810 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 logs: (1.816056146s)
--- PASS: TestFunctional/serial/LogsCmd (1.82s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 logs --file /tmp/TestFunctionalserialLogsFileCmd560496299/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 logs --file /tmp/TestFunctionalserialLogsFileCmd560496299/001/logs.txt: (1.809660569s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.53s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-483810 apply -f testdata/invalidsvc.yaml
E0108 23:06:10.094248  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-483810
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-483810: exit status 115 (361.403882ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.240:30220 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-483810 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)
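For context on what this test exercises: testdata/invalidsvc.yaml is not shown in the log, but the failure mode is a NodePort Service that no running pod backs, which is why `minikube service` exits with SVC_UNREACHABLE. A hedged reconstruction with an illustrative selector that matches nothing (not the test's actual manifest):
kubectl --context functional-483810 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
    - port: 80
EOF
minikube service invalid-svc -p functional-483810   # fails: no running pod for service invalid-svc
kubectl --context functional-483810 delete service invalid-svc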

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 config get cpus: exit status 14 (72.723659ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 config get cpus: exit status 14 (78.780884ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-483810 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-483810 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 414969: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.05s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-483810 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (180.688453ms)

                                                
                                                
-- stdout --
	* [functional-483810] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:06:47.071871  414856 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:06:47.071990  414856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:06:47.071998  414856 out.go:309] Setting ErrFile to fd 2...
	I0108 23:06:47.072012  414856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:06:47.072240  414856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:06:47.072814  414856 out.go:303] Setting JSON to false
	I0108 23:06:47.073946  414856 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13733,"bootTime":1704741474,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:06:47.074021  414856 start.go:138] virtualization: kvm guest
	I0108 23:06:47.076935  414856 out.go:177] * [functional-483810] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:06:47.079685  414856 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:06:47.079702  414856 notify.go:220] Checking for updates...
	I0108 23:06:47.083289  414856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:06:47.085340  414856 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:06:47.087036  414856 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:06:47.088696  414856 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:06:47.090522  414856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:06:47.092607  414856 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:06:47.093119  414856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:06:47.093171  414856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:06:47.110215  414856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0108 23:06:47.110732  414856 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:06:47.111488  414856 main.go:141] libmachine: Using API Version  1
	I0108 23:06:47.111520  414856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:06:47.111939  414856 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:06:47.112233  414856 main.go:141] libmachine: (functional-483810) Calling .DriverName
	I0108 23:06:47.112577  414856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:06:47.113048  414856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:06:47.113105  414856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:06:47.130337  414856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0108 23:06:47.130789  414856 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:06:47.131322  414856 main.go:141] libmachine: Using API Version  1
	I0108 23:06:47.131348  414856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:06:47.131703  414856 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:06:47.131882  414856 main.go:141] libmachine: (functional-483810) Calling .DriverName
	I0108 23:06:47.175352  414856 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:06:47.176874  414856 start.go:298] selected driver: kvm2
	I0108 23:06:47.176904  414856 start.go:902] validating driver "kvm2" against &{Name:functional-483810 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-483810 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:06:47.177555  414856 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:06:47.180425  414856 out.go:177] 
	W0108 23:06:47.182148  414856 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 23:06:47.183678  414856 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
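The dry-run check exercised above can be reproduced outside the test harness. Below is a minimal sketch, assuming a minikube binary on PATH and a hypothetical profile name "dryrun-demo", that asserts the RSRC_INSUFFICIENT_REQ_MEMORY failure when the requested memory (250MB) is below the usable minimum of 1800MB:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube to validate a start without creating anything; 250MB is
	// below the usable minimum, so the command is expected to fail.
	cmd := exec.Command("minikube", "start", "-p", "dryrun-demo",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected success; dry-run should reject 250MB")
		return
	}
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("dry-run correctly rejected the undersized memory request")
	} else {
		fmt.Printf("dry-run failed for another reason: %v\n%s\n", err, out)
	}
}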

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483810 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-483810 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (219.319507ms)

                                                
                                                
-- stdout --
	* [functional-483810] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:06:45.713411  414738 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:06:45.713647  414738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:06:45.713662  414738 out.go:309] Setting ErrFile to fd 2...
	I0108 23:06:45.713669  414738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:06:45.714182  414738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:06:45.714968  414738 out.go:303] Setting JSON to false
	I0108 23:06:45.716413  414738 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13732,"bootTime":1704741474,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:06:45.716509  414738 start.go:138] virtualization: kvm guest
	I0108 23:06:45.719910  414738 out.go:177] * [functional-483810] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 23:06:45.721629  414738 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:06:45.721671  414738 notify.go:220] Checking for updates...
	I0108 23:06:45.724430  414738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:06:45.726543  414738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:06:45.728692  414738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:06:45.730824  414738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:06:45.733207  414738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:06:45.735698  414738 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:06:45.736245  414738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:06:45.736354  414738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:06:45.760405  414738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I0108 23:06:45.760857  414738 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:06:45.761486  414738 main.go:141] libmachine: Using API Version  1
	I0108 23:06:45.761519  414738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:06:45.761955  414738 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:06:45.762175  414738 main.go:141] libmachine: (functional-483810) Calling .DriverName
	I0108 23:06:45.762510  414738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:06:45.762956  414738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:06:45.763004  414738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:06:45.782741  414738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0108 23:06:45.783324  414738 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:06:45.783931  414738 main.go:141] libmachine: Using API Version  1
	I0108 23:06:45.783957  414738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:06:45.784394  414738 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:06:45.784622  414738 main.go:141] libmachine: (functional-483810) Calling .DriverName
	I0108 23:06:45.829199  414738 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0108 23:06:45.831706  414738 start.go:298] selected driver: kvm2
	I0108 23:06:45.831752  414738 start.go:902] validating driver "kvm2" against &{Name:functional-483810 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-483810 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.240 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:06:45.831981  414738 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:06:45.834509  414738 out.go:177] 
	W0108 23:06:45.836051  414738 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 23:06:45.837837  414738 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
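The French messages above ("Utilisation du pilote kvm2...", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY...") come from running the same dry-run under a non-English locale. A small sketch, assuming the LC_ALL environment variable is what selects the translation and using a hypothetical profile name "i18n-demo":

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "i18n-demo",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	// Inherit the current environment but force a French locale so minikube's
	// messages (including the memory error) are printed translated.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected, as in the log above
	fmt.Printf("%s", out)
}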

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (26.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-483810 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-483810 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-97nzp" [6ab28666-8e07-41d9-a2f4-3257db112257] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-97nzp" [6ab28666-8e07-41d9-a2f4-3257db112257] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.055740962s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.240:31459
functional_test.go:1674: http://192.168.39.240:31459: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-97nzp

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.240:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.240:31459
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.77s)
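The sequence above (create a deployment, expose it as a NodePort service, resolve the URL with "minikube service --url", then issue an HTTP request) can be sketched roughly as follows. This assumes kubectl and minikube on PATH; "hello-demo" and "demo-profile" are placeholder names, not the ones used by the test:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output, panicking on error
// to keep the sketch short.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	run("kubectl", "create", "deployment", "hello-demo", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "expose", "deployment", "hello-demo", "--type=NodePort", "--port=8080")
	run("kubectl", "wait", "--for=condition=available", "deployment/hello-demo", "--timeout=5m")

	// minikube prints the reachable NodePort URL, e.g. http://192.168.39.240:31459
	url := strings.TrimSpace(run("minikube", "-p", "demo-profile", "service", "hello-demo", "--url"))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}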

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (55.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [28280f76-0c0f-4885-a681-58afb9e86f87] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.02792163s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-483810 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-483810 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-483810 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-483810 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-483810 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ec9578d-875a-4038-a0e2-38ca6531bc47] Pending
helpers_test.go:344: "sp-pod" [2ec9578d-875a-4038-a0e2-38ca6531bc47] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0108 23:06:30.574808  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [2ec9578d-875a-4038-a0e2-38ca6531bc47] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.006427563s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-483810 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-483810 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-483810 delete -f testdata/storage-provisioner/pod.yaml: (1.20934103s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-483810 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c28226e5-46a1-4526-9c5a-3966becb4497] Pending
helpers_test.go:344: "sp-pod" [c28226e5-46a1-4526-9c5a-3966becb4497] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c28226e5-46a1-4526-9c5a-3966becb4497] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.009777585s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-483810 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.60s)
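The persistence check above boils down to: bind a PVC, write a file from one pod, delete that pod, start a fresh pod against the same claim, and confirm the file survived. A compressed sketch of that flow, assuming kubectl on PATH and the test's manifest paths relative to the current directory (the pod name sp-pod and the /tmp/mount path are taken from the run above):

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml") // claim backed by the storage-provisioner addon
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // first pod mounts the claim at /tmp/mount
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and the file written to it) should outlive it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo")
	fmt.Println("file survived pod recreation")
}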

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh -n functional-483810 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cp functional-483810:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd113446180/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh -n functional-483810 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh -n functional-483810 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-483810 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-98hzw" [3280fc02-b30c-49c2-b184-39da1ec30165] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-98hzw" [3280fc02-b30c-49c2-b184-39da1ec30165] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.005322348s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;": exit status 1 (227.612332ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;": exit status 1 (285.844227ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;": exit status 1 (565.356553ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-483810 exec mysql-859648c796-98hzw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.21s)
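Note how the first few "show databases;" attempts above fail with ERROR 1045 and ERROR 2002 while mysqld is still initializing inside the pod, and the check simply retries until one succeeds. A sketch of that retry loop (pod name copied from the run above; kubectl assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-859648c796-98hzw" // from the run above; substitute your own pod name
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql is up:\n%s", out)
			return
		}
		// Typical transient failures while the server starts:
		// ERROR 1045 (access denied before init completes) and
		// ERROR 2002 (socket not yet available).
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}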

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/407094/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /etc/test/nested/copy/407094/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/407094.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /etc/ssl/certs/407094.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/407094.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /usr/share/ca-certificates/407094.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/4070942.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /etc/ssl/certs/4070942.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/4070942.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /usr/share/ca-certificates/4070942.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-483810 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "sudo systemctl is-active docker": exit status 1 (290.141592ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "sudo systemctl is-active containerd": exit status 1 (269.524843ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
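The non-zero exits above are the expected result: with cri-o as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit non-zero (systemd uses exit code 3 for inactive units, visible above as "ssh: Process exited with status 3"). A small sketch, with a hypothetical profile name, that treats that outcome as the passing case:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// "minikube ssh" runs the command inside the VM; a non-zero exit here
		// just means the unit is not active, which is what we want with cri-o.
		out, err := exec.Command("minikube", "-p", "demo-profile", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil && strings.Contains(state, "inactive") {
			fmt.Printf("%s is disabled, as expected\n", unit)
		} else {
			fmt.Printf("unexpected state for %s: %q (err=%v)\n", unit, state, err)
		}
	}
}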

                                                
                                    
x
+
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 version -o=json --components: (1.107980157s)
--- PASS: TestFunctional/parallel/Version/components (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (25.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdany-port536275626/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704755176047737601" to /tmp/TestFunctionalparallelMountCmdany-port536275626/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704755176047737601" to /tmp/TestFunctionalparallelMountCmdany-port536275626/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704755176047737601" to /tmp/TestFunctionalparallelMountCmdany-port536275626/001/test-1704755176047737601
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.368118ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 23:06 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 23:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 23:06 test-1704755176047737601
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh cat /mount-9p/test-1704755176047737601
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-483810 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [534899a9-080c-4b63-9847-c7dff94f08d5] Pending
helpers_test.go:344: "busybox-mount" [534899a9-080c-4b63-9847-c7dff94f08d5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [534899a9-080c-4b63-9847-c7dff94f08d5] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [534899a9-080c-4b63-9847-c7dff94f08d5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.005885437s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-483810 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdany-port536275626/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.03s)
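The mount test above starts "minikube mount" as a background process and then polls findmnt inside the VM until the 9p mount shows up, which is why the first findmnt attempt legitimately fails. A sketch of that pattern (the profile name and host directory are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Background the mount; it keeps running until killed.
	mount := exec.Command("minikube", "-p", "demo-profile", "mount", "/tmp/demo-dir:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // analogous to the test's "stopping [...]" cleanup step

	// Poll until the 9p filesystem is visible inside the VM.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("minikube", "-p", "demo-profile", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mount never appeared")
}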

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdspecific-port615984779/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.983261ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdspecific-port615984779/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "sudo umount -f /mount-9p": exit status 1 (292.30993ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-483810 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdspecific-port615984779/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-483810 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-483810 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kzpcr" [7f72a108-2aa5-43c2-8737-632983174fb6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kzpcr" [7f72a108-2aa5-43c2-8737-632983174fb6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.013064306s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T" /mount1: exit status 1 (366.6439ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-483810 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483810 /tmp/TestFunctionalparallelMountCmdVerifyCleanup735995869/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "493.481929ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "78.287752ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "331.476654ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.686875ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483810 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-483810
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-483810
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483810 image ls --format short --alsologtostderr:
I0108 23:07:07.785969  415616 out.go:296] Setting OutFile to fd 1 ...
I0108 23:07:07.786254  415616 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:07.786266  415616 out.go:309] Setting ErrFile to fd 2...
I0108 23:07:07.786270  415616 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:07.786482  415616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
I0108 23:07:07.787103  415616 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:07.787221  415616 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:07.787646  415616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:07.787704  415616 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:07.806029  415616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32903
I0108 23:07:07.806531  415616 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:07.807130  415616 main.go:141] libmachine: Using API Version  1
I0108 23:07:07.807159  415616 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:07.807601  415616 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:07.807774  415616 main.go:141] libmachine: (functional-483810) Calling .GetState
I0108 23:07:07.809626  415616 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:07.809668  415616 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:07.829790  415616 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
I0108 23:07:07.830493  415616 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:07.831063  415616 main.go:141] libmachine: Using API Version  1
I0108 23:07:07.831082  415616 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:07.831458  415616 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:07.831706  415616 main.go:141] libmachine: (functional-483810) Calling .DriverName
I0108 23:07:07.831967  415616 ssh_runner.go:195] Run: systemctl --version
I0108 23:07:07.831996  415616 main.go:141] libmachine: (functional-483810) Calling .GetSSHHostname
I0108 23:07:07.838370  415616 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:07.838849  415616 main.go:141] libmachine: (functional-483810) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:8b:cf", ip: ""} in network mk-functional-483810: {Iface:virbr1 ExpiryTime:2024-01-09 00:03:14 +0000 UTC Type:0 Mac:52:54:00:f4:8b:cf Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-483810 Clientid:01:52:54:00:f4:8b:cf}
I0108 23:07:07.838868  415616 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined IP address 192.168.39.240 and MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:07.839209  415616 main.go:141] libmachine: (functional-483810) Calling .GetSSHPort
I0108 23:07:07.839446  415616 main.go:141] libmachine: (functional-483810) Calling .GetSSHKeyPath
I0108 23:07:07.839606  415616 main.go:141] libmachine: (functional-483810) Calling .GetSSHUsername
I0108 23:07:07.839774  415616 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/functional-483810/id_rsa Username:docker}
I0108 23:07:07.936976  415616 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:07:08.001142  415616 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.001160  415616 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.001486  415616 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:08.001536  415616 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.001576  415616 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:08.001586  415616 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.001596  415616 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.001951  415616 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:08.002005  415616 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.002022  415616 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483810 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| localhost/minikube-local-cache-test     | functional-483810  | c41e235dc270f | 3.35kB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-483810  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483810 image ls --format table --alsologtostderr:
I0108 23:07:08.106525  415712 out.go:296] Setting OutFile to fd 1 ...
I0108 23:07:08.106672  415712 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.106685  415712 out.go:309] Setting ErrFile to fd 2...
I0108 23:07:08.106693  415712 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.107071  415712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
I0108 23:07:08.107914  415712 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.108071  415712 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.108640  415712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.108711  415712 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.127277  415712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
I0108 23:07:08.127909  415712 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.128841  415712 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.128874  415712 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.129302  415712 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.129578  415712 main.go:141] libmachine: (functional-483810) Calling .GetState
I0108 23:07:08.131869  415712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.131936  415712 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.153888  415712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
I0108 23:07:08.154321  415712 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.154767  415712 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.154788  415712 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.155108  415712 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.155307  415712 main.go:141] libmachine: (functional-483810) Calling .DriverName
I0108 23:07:08.155553  415712 ssh_runner.go:195] Run: systemctl --version
I0108 23:07:08.155588  415712 main.go:141] libmachine: (functional-483810) Calling .GetSSHHostname
I0108 23:07:08.158497  415712 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.159143  415712 main.go:141] libmachine: (functional-483810) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:8b:cf", ip: ""} in network mk-functional-483810: {Iface:virbr1 ExpiryTime:2024-01-09 00:03:14 +0000 UTC Type:0 Mac:52:54:00:f4:8b:cf Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-483810 Clientid:01:52:54:00:f4:8b:cf}
I0108 23:07:08.159212  415712 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined IP address 192.168.39.240 and MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.159425  415712 main.go:141] libmachine: (functional-483810) Calling .GetSSHPort
I0108 23:07:08.159697  415712 main.go:141] libmachine: (functional-483810) Calling .GetSSHKeyPath
I0108 23:07:08.159909  415712 main.go:141] libmachine: (functional-483810) Calling .GetSSHUsername
I0108 23:07:08.160109  415712 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/functional-483810/id_rsa Username:docker}
I0108 23:07:08.261141  415712 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:07:08.357234  415712 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.357259  415712 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.357628  415712 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.357656  415712 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:08.357678  415712 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.357688  415712 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.357938  415712 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.357959  415712 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:08.357976  415712 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)
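The boxed table above is minikube's own rendering of the image list. As a rough illustration only (not minikube's actual formatter), a few of those rows could be laid out in Go with text/tabwriter; the row values are copied from the table above, everything else in this sketch is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// imageRow mirrors one row of the `image ls --format table` output above.
type imageRow struct {
	Image, Tag, ID, Size string
}

func main() {
	// Rows copied from the table above; the renderer itself is illustrative only.
	rows := []imageRow{
		{"docker.io/library/mysql", "5.7", "5107333e08a87", "520MB"},
		{"gcr.io/k8s-minikube/storage-provisioner", "v5", "6e38f40d628db", "31.5MB"},
		{"registry.k8s.io/pause", "3.9", "e6f1816883972", "750kB"},
	}

	// text/tabwriter aligns the columns; minikube's formatter additionally draws
	// the boxed borders seen above, which this sketch does not reproduce.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintln(w, "IMAGE\tTAG\tIMAGE ID\tSIZE")
	for _, r := range rows {
		fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", r.Image, r.Tag, r.ID, r.Size)
	}
	w.Flush()
}
```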

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483810 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDig
ests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-483810"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests"
:["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c41e235dc270fafe3faab9c7b50bfd930f0a02b778298bc21a07406ed5bb24ae","repoDigests":["localhost/minikube-local-cache-test@sha256:5f7fbfc7c4eb246a1ed464312ec81522abccf54ba8c565725cadc1fa02421028"],"repoTags":["localhost/minikube-local-cache-test:functional-483810"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha
256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0
da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f9
2e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube
-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483810 image ls --format json --alsologtostderr:
I0108 23:07:08.092170  415702 out.go:296] Setting OutFile to fd 1 ...
I0108 23:07:08.092326  415702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.092339  415702 out.go:309] Setting ErrFile to fd 2...
I0108 23:07:08.092346  415702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.092622  415702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
I0108 23:07:08.093467  415702 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.093602  415702 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.094072  415702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.094153  415702 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.113103  415702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
I0108 23:07:08.113833  415702 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.114498  415702 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.114517  415702 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.114919  415702 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.115160  415702 main.go:141] libmachine: (functional-483810) Calling .GetState
I0108 23:07:08.117889  415702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.117964  415702 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.134428  415702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
I0108 23:07:08.134884  415702 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.135435  415702 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.135461  415702 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.135845  415702 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.136070  415702 main.go:141] libmachine: (functional-483810) Calling .DriverName
I0108 23:07:08.136276  415702 ssh_runner.go:195] Run: systemctl --version
I0108 23:07:08.136306  415702 main.go:141] libmachine: (functional-483810) Calling .GetSSHHostname
I0108 23:07:08.139405  415702 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.139882  415702 main.go:141] libmachine: (functional-483810) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:8b:cf", ip: ""} in network mk-functional-483810: {Iface:virbr1 ExpiryTime:2024-01-09 00:03:14 +0000 UTC Type:0 Mac:52:54:00:f4:8b:cf Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-483810 Clientid:01:52:54:00:f4:8b:cf}
I0108 23:07:08.139903  415702 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined IP address 192.168.39.240 and MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.140140  415702 main.go:141] libmachine: (functional-483810) Calling .GetSSHPort
I0108 23:07:08.140390  415702 main.go:141] libmachine: (functional-483810) Calling .GetSSHKeyPath
I0108 23:07:08.140561  415702 main.go:141] libmachine: (functional-483810) Calling .GetSSHUsername
I0108 23:07:08.140743  415702 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/functional-483810/id_rsa Username:docker}
I0108 23:07:08.235330  415702 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:07:08.313736  415702 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.313757  415702 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.314149  415702 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.314170  415702 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:08.314202  415702 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.314217  415702 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.314537  415702 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:08.314616  415702 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.314630  415702 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
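The JSON printed above is a flat array of objects with id, repoDigests, repoTags and size fields. A minimal Go sketch (hypothetical helper code, not part of the test suite) that decodes a two-entry excerpt of that output into structs and prints a short summary:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// listedImage matches the fields visible in the `image ls --format json`
// output above; any other fields are ignored by encoding/json.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// A two-entry excerpt of the output shown above.
	raw := `[
	  {"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
	   "repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],
	   "repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
	  {"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7",
	   "repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a"],
	   "repoTags":[],"size":"43824855"}
	]`

	var images []listedImage
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%s  %s  %s bytes\n", img.ID[:13], tag, img.Size)
	}
}
```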

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483810 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c41e235dc270fafe3faab9c7b50bfd930f0a02b778298bc21a07406ed5bb24ae
repoDigests:
- localhost/minikube-local-cache-test@sha256:5f7fbfc7c4eb246a1ed464312ec81522abccf54ba8c565725cadc1fa02421028
repoTags:
- localhost/minikube-local-cache-test:functional-483810
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-483810
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483810 image ls --format yaml --alsologtostderr:
I0108 23:07:07.780633  415618 out.go:296] Setting OutFile to fd 1 ...
I0108 23:07:07.780945  415618 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:07.780957  415618 out.go:309] Setting ErrFile to fd 2...
I0108 23:07:07.780967  415618 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:07.781267  415618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
I0108 23:07:07.782039  415618 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:07.782181  415618 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:07.782703  415618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:07.782768  415618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:07.799999  415618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
I0108 23:07:07.800593  415618 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:07.801314  415618 main.go:141] libmachine: Using API Version  1
I0108 23:07:07.801353  415618 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:07.801805  415618 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:07.802001  415618 main.go:141] libmachine: (functional-483810) Calling .GetState
I0108 23:07:07.804098  415618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:07.804137  415618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:07.823534  415618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
I0108 23:07:07.824148  415618 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:07.824717  415618 main.go:141] libmachine: Using API Version  1
I0108 23:07:07.824745  415618 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:07.828113  415618 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:07.828522  415618 main.go:141] libmachine: (functional-483810) Calling .DriverName
I0108 23:07:07.828800  415618 ssh_runner.go:195] Run: systemctl --version
I0108 23:07:07.828838  415618 main.go:141] libmachine: (functional-483810) Calling .GetSSHHostname
I0108 23:07:07.832707  415618 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:07.833507  415618 main.go:141] libmachine: (functional-483810) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:8b:cf", ip: ""} in network mk-functional-483810: {Iface:virbr1 ExpiryTime:2024-01-09 00:03:14 +0000 UTC Type:0 Mac:52:54:00:f4:8b:cf Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-483810 Clientid:01:52:54:00:f4:8b:cf}
I0108 23:07:07.833694  415618 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined IP address 192.168.39.240 and MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:07.834203  415618 main.go:141] libmachine: (functional-483810) Calling .GetSSHPort
I0108 23:07:07.834570  415618 main.go:141] libmachine: (functional-483810) Calling .GetSSHKeyPath
I0108 23:07:07.834826  415618 main.go:141] libmachine: (functional-483810) Calling .GetSSHUsername
I0108 23:07:07.834968  415618 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/functional-483810/id_rsa Username:docker}
I0108 23:07:07.937417  415618 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:07:08.017856  415618 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.017874  415618 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.018198  415618 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.018211  415618 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:08.018224  415618 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:08.018244  415618 main.go:141] libmachine: Making call to close driver server
I0108 23:07:08.018255  415618 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:08.018522  415618 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:08.018537  415618 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483810 ssh pgrep buildkitd: exit status 1 (270.968936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image build -t localhost/my-image:functional-483810 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image build -t localhost/my-image:functional-483810 testdata/build --alsologtostderr: (2.57168931s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483810 image build -t localhost/my-image:functional-483810 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b69a42e261d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-483810
--> 3dd7f2aae6e
Successfully tagged localhost/my-image:functional-483810
3dd7f2aae6e78afab4c85e637820968cabcccc793812cdfe5e542e269a754f7e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483810 image build -t localhost/my-image:functional-483810 testdata/build --alsologtostderr:
I0108 23:07:08.041895  415691 out.go:296] Setting OutFile to fd 1 ...
I0108 23:07:08.042057  415691 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.042066  415691 out.go:309] Setting ErrFile to fd 2...
I0108 23:07:08.042071  415691 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:07:08.042280  415691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
I0108 23:07:08.043066  415691 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.044035  415691 config.go:182] Loaded profile config "functional-483810": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 23:07:08.044776  415691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.044871  415691 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.071181  415691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37251
I0108 23:07:08.073098  415691 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.074039  415691 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.074076  415691 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.074710  415691 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.074980  415691 main.go:141] libmachine: (functional-483810) Calling .GetState
I0108 23:07:08.077489  415691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 23:07:08.077553  415691 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:07:08.101497  415691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
I0108 23:07:08.102315  415691 main.go:141] libmachine: () Calling .GetVersion
I0108 23:07:08.103033  415691 main.go:141] libmachine: Using API Version  1
I0108 23:07:08.103068  415691 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:07:08.103613  415691 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:07:08.103917  415691 main.go:141] libmachine: (functional-483810) Calling .DriverName
I0108 23:07:08.104195  415691 ssh_runner.go:195] Run: systemctl --version
I0108 23:07:08.104233  415691 main.go:141] libmachine: (functional-483810) Calling .GetSSHHostname
I0108 23:07:08.107942  415691 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.108479  415691 main.go:141] libmachine: (functional-483810) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:8b:cf", ip: ""} in network mk-functional-483810: {Iface:virbr1 ExpiryTime:2024-01-09 00:03:14 +0000 UTC Type:0 Mac:52:54:00:f4:8b:cf Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-483810 Clientid:01:52:54:00:f4:8b:cf}
I0108 23:07:08.108523  415691 main.go:141] libmachine: (functional-483810) DBG | domain functional-483810 has defined IP address 192.168.39.240 and MAC address 52:54:00:f4:8b:cf in network mk-functional-483810
I0108 23:07:08.108695  415691 main.go:141] libmachine: (functional-483810) Calling .GetSSHPort
I0108 23:07:08.108952  415691 main.go:141] libmachine: (functional-483810) Calling .GetSSHKeyPath
I0108 23:07:08.109256  415691 main.go:141] libmachine: (functional-483810) Calling .GetSSHUsername
I0108 23:07:08.109415  415691 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/functional-483810/id_rsa Username:docker}
I0108 23:07:08.212479  415691 build_images.go:151] Building image from path: /tmp/build.750163398.tar
I0108 23:07:08.212566  415691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 23:07:08.223611  415691 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.750163398.tar
I0108 23:07:08.230353  415691 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.750163398.tar: stat -c "%s %y" /var/lib/minikube/build/build.750163398.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.750163398.tar': No such file or directory
I0108 23:07:08.230397  415691 ssh_runner.go:362] scp /tmp/build.750163398.tar --> /var/lib/minikube/build/build.750163398.tar (3072 bytes)
I0108 23:07:08.272163  415691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.750163398
I0108 23:07:08.295516  415691 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.750163398 -xf /var/lib/minikube/build/build.750163398.tar
I0108 23:07:08.320103  415691 crio.go:297] Building image: /var/lib/minikube/build/build.750163398
I0108 23:07:08.320178  415691 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-483810 /var/lib/minikube/build/build.750163398 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 23:07:10.509369  415691 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-483810 /var/lib/minikube/build/build.750163398 --cgroup-manager=cgroupfs: (2.189110935s)
I0108 23:07:10.509487  415691 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.750163398
I0108 23:07:10.522566  415691 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.750163398.tar
I0108 23:07:10.535084  415691 build_images.go:207] Built localhost/my-image:functional-483810 from /tmp/build.750163398.tar
I0108 23:07:10.535134  415691 build_images.go:123] succeeded building to: functional-483810
I0108 23:07:10.535145  415691 build_images.go:124] failed building to: 
I0108 23:07:10.535180  415691 main.go:141] libmachine: Making call to close driver server
I0108 23:07:10.535199  415691 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:10.535545  415691 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:10.535569  415691 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:07:10.535580  415691 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:10.535585  415691 main.go:141] libmachine: Making call to close driver server
I0108 23:07:10.535600  415691 main.go:141] libmachine: (functional-483810) Calling .Close
I0108 23:07:10.535905  415691 main.go:141] libmachine: (functional-483810) DBG | Closing plugin on server side
I0108 23:07:10.535954  415691 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:07:10.535968  415691 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.14s)
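The build test drives everything through the CLI: it probes for buildkitd over SSH, then runs `image build` against testdata/build, and the stderr above shows the CLI performing the build with `sudo podman build` inside the VM. A hedged Go sketch of invoking that same CLI flow with os/exec; the binary path, profile name, tag and build directory are taken from the log above, while the wrapper itself is hypothetical:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // binary path used throughout the log
		profile  = "functional-483810"        // profile name used throughout the log
	)

	// The test first checks whether buildkitd is running inside the VM;
	// in this run pgrep exited non-zero and podman build was used instead.
	check := exec.Command(minikube, "-p", profile, "ssh", "pgrep", "buildkitd")
	if err := check.Run(); err != nil {
		fmt.Println("buildkitd not running in the VM")
	}

	// Then it builds the context in testdata/build into a locally tagged image.
	build := exec.Command(minikube, "-p", profile,
		"image", "build", "-t", "localhost/my-image:functional-483810",
		"testdata/build", "--alsologtostderr")
	out, err := build.CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```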

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.067695146s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-483810
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr: (4.725398307s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr: (2.573462988s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 service list: (1.335398975s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-483810
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image load --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr: (7.14434552s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.40s)
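The daemon-load tests above follow one pattern: pull or re-tag an image in the host Docker daemon, then push it into the cluster runtime with `image load --daemon`. A hypothetical Go wrapper around those commands; the image names and flags come from the log above, the helper function is assumed:

```go
package main

import (
	"log"
	"os/exec"
)

// runCmd runs a host command and aborts on failure; a sketch of the
// pull -> tag -> load --daemon sequence the test above performs.
func runCmd(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	const tagged = "gcr.io/google-containers/addon-resizer:functional-483810"

	// Pull a known upstream tag and re-tag it with the profile name...
	runCmd("docker", "pull", "gcr.io/google-containers/addon-resizer:1.8.9")
	runCmd("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.9", tagged)

	// ...then copy it from the host's Docker daemon into the cluster runtime.
	runCmd("out/minikube-linux-amd64", "-p", "functional-483810",
		"image", "load", "--daemon", tagged, "--alsologtostderr")
}
```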

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 service list -o json: (1.512827785s)
functional_test.go:1493: Took "1.513019807s" to run "out/minikube-linux-amd64 -p functional-483810 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.240:32551
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.240:32551
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.76s)
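`service hello-node --url` prints the NodePort endpoint, http://192.168.39.240:32551 in this run. A small Go sketch (not part of the test suite) that probes such an endpoint with net/http to confirm it answers:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint as reported by `service hello-node --url` in the log above.
	url := "http://192.168.39.240:32551"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}
```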

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image save gcr.io/google-containers/addon-resizer:functional-483810 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
2024/01/08 23:07:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image save gcr.io/google-containers/addon-resizer:functional-483810 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.066935658s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image rm gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.367938853s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)
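ImageSaveToFile and ImageLoadFromFile exercise a save/load round trip through a tarball in the Jenkins workspace. A hedged Go sketch of the same round trip via the CLI; the image tag and tarball path come from the log above, and the run helper is hypothetical:

```go
package main

import (
	"log"
	"os/exec"
)

// run executes a minikube subcommand against the functional-483810 profile
// and aborts on failure. Binary path and profile come from the log above.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-483810"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const (
		image   = "gcr.io/google-containers/addon-resizer:functional-483810"
		tarball = "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
	)

	// Save the tagged image from the cluster runtime to a tarball on the host...
	run("image", "save", image, tarball, "--alsologtostderr")
	// ...then load it back from that tarball, as the two tests above do.
	run("image", "load", tarball, "--alsologtostderr")
	// A final `image ls` would confirm the tag is present again.
	run("image", "ls")
}
```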

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-483810
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-483810 image save --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-483810 image save --daemon gcr.io/google-containers/addon-resizer:functional-483810 --alsologtostderr: (1.241680877s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-483810
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.28s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-483810
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-483810
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-483810
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (109.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-132808 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0108 23:08:33.456607  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-132808 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m49.231038086s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (109.23s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons enable ingress --alsologtostderr -v=5: (17.517127565s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.52s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-132808 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

                                                
                                    
TestJSONOutput/start/Command (101.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-901241 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0108 23:12:35.598967  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:13:57.522467  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-901241 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.908752523s)
--- PASS: TestJSONOutput/start/Command (101.91s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-901241 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-901241 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-901241 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-901241 --output=json --user=testUser: (7.113326527s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-882191 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-882191 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.068349ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0fc34856-6530-4b2f-8bf6-17fde994ba58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-882191] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fea99a13-f8fa-4a52-9960-ecf42ef634c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"927b757d-951a-402b-9f08-f247377831b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9a9154d7-d153-4457-974a-58d990ec316c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig"}}
	{"specversion":"1.0","id":"d9aebe80-a017-465e-9918-b05c9f173650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube"}}
	{"specversion":"1.0","id":"da1a69ba-2e24-4117-9bb3-84738dcb2733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"495d7f25-373d-4beb-9991-e118fbc229ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2095347-85d8-49be-89f4-662ca01490c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-882191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-882191
--- PASS: TestErrorJSONOutput (0.23s)
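The stdout above shows the CloudEvents-style JSON lines minikube emits with --output=json; as a rough sketch only (the minikubeEvent struct name is an assumption for illustration, not taken from the test code), one such line can be decoded in Go like this:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the keys visible in the JSON lines above
// ("specversion", "id", "source", "type", "data"); the name is invented for this sketch.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the stdout above.
	line := `{"specversion":"1.0","id":"d2095347-85d8-49be-89f4-662ca01490c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on linux/amd64
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}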

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (99.46s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-545949 --driver=kvm2  --container-runtime=crio
E0108 23:14:19.627515  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.632641  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.643068  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.663384  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.703752  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.784162  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:19.944663  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:20.265344  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:20.906329  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:22.186959  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:24.747929  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:29.869085  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:14:40.109281  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-545949 --driver=kvm2  --container-runtime=crio: (46.724787476s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-548453 --driver=kvm2  --container-runtime=crio
E0108 23:15:00.589793  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:15:41.550920  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-548453 --driver=kvm2  --container-runtime=crio: (49.827253821s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-545949
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-548453
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-548453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-548453
helpers_test.go:175: Cleaning up "first-545949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-545949
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-545949: (1.00159105s)
--- PASS: TestMinikubeProfile (99.46s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-152567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0108 23:15:49.610217  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:16:13.678223  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-152567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.421046795s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-152567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-152567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169147 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0108 23:16:41.362878  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169147 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.994223695s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-152567 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
TestMountStart/serial/Stop (2.1s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-169147
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-169147: (2.099302315s)
--- PASS: TestMountStart/serial/Stop (2.10s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.62s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-169147
E0108 23:17:03.472997  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-169147: (20.622747584s)
--- PASS: TestMountStart/serial/RestartStopped (21.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-169147 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266395 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266395 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.322665396s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.77s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-266395 -- rollout status deployment/busybox: (3.374793108s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-nl6pn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-266395 -- exec busybox-5bc68d56bd-wz22p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.30s)

                                                
                                    
TestMultiNode/serial/AddNode (40.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-266395 -v 3 --alsologtostderr
E0108 23:19:19.627305  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:19:47.313575  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-266395 -v 3 --alsologtostderr: (40.031646783s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.63s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-266395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp testdata/cp-test.txt multinode-266395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286421314/001/cp-test_multinode-266395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395:/home/docker/cp-test.txt multinode-266395-m02:/home/docker/cp-test_multinode-266395_multinode-266395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test_multinode-266395_multinode-266395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395:/home/docker/cp-test.txt multinode-266395-m03:/home/docker/cp-test_multinode-266395_multinode-266395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test_multinode-266395_multinode-266395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp testdata/cp-test.txt multinode-266395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286421314/001/cp-test_multinode-266395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt multinode-266395:/home/docker/cp-test_multinode-266395-m02_multinode-266395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test_multinode-266395-m02_multinode-266395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m02:/home/docker/cp-test.txt multinode-266395-m03:/home/docker/cp-test_multinode-266395-m02_multinode-266395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test_multinode-266395-m02_multinode-266395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp testdata/cp-test.txt multinode-266395-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3286421314/001/cp-test_multinode-266395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt multinode-266395:/home/docker/cp-test_multinode-266395-m03_multinode-266395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395 "sudo cat /home/docker/cp-test_multinode-266395-m03_multinode-266395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 cp multinode-266395-m03:/home/docker/cp-test.txt multinode-266395-m02:/home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 ssh -n multinode-266395-m02 "sudo cat /home/docker/cp-test_multinode-266395-m03_multinode-266395-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.89s)

                                                
                                    
TestMultiNode/serial/StopNode (3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-266395 node stop m03: (2.100062783s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266395 status: exit status 7 (447.813797ms)

                                                
                                                
-- stdout --
	multinode-266395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-266395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-266395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr: exit status 7 (454.74431ms)

                                                
                                                
-- stdout --
	multinode-266395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-266395-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-266395-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:20:07.365461  423123 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:20:07.365570  423123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:20:07.365582  423123 out.go:309] Setting ErrFile to fd 2...
	I0108 23:20:07.365586  423123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:20:07.365760  423123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:20:07.365927  423123 out.go:303] Setting JSON to false
	I0108 23:20:07.365965  423123 mustload.go:65] Loading cluster: multinode-266395
	I0108 23:20:07.366093  423123 notify.go:220] Checking for updates...
	I0108 23:20:07.366393  423123 config.go:182] Loaded profile config "multinode-266395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:20:07.366407  423123 status.go:255] checking status of multinode-266395 ...
	I0108 23:20:07.366821  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.366884  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.385499  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I0108 23:20:07.385905  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.386495  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.386518  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.386897  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.387087  423123 main.go:141] libmachine: (multinode-266395) Calling .GetState
	I0108 23:20:07.388703  423123 status.go:330] multinode-266395 host status = "Running" (err=<nil>)
	I0108 23:20:07.388725  423123 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:20:07.389142  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.389214  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.404007  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0108 23:20:07.404442  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.404871  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.404896  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.405192  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.405366  423123 main.go:141] libmachine: (multinode-266395) Calling .GetIP
	I0108 23:20:07.407994  423123 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:20:07.408340  423123 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:20:07.408378  423123 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:20:07.408485  423123 host.go:66] Checking if "multinode-266395" exists ...
	I0108 23:20:07.408835  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.408878  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.423432  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0108 23:20:07.423801  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.424233  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.424255  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.424573  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.424765  423123 main.go:141] libmachine: (multinode-266395) Calling .DriverName
	I0108 23:20:07.424936  423123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:20:07.424973  423123 main.go:141] libmachine: (multinode-266395) Calling .GetSSHHostname
	I0108 23:20:07.427861  423123 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:20:07.428314  423123 main.go:141] libmachine: (multinode-266395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:1d:b6", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:17:29 +0000 UTC Type:0 Mac:52:54:00:64:1d:b6 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-266395 Clientid:01:52:54:00:64:1d:b6}
	I0108 23:20:07.428348  423123 main.go:141] libmachine: (multinode-266395) DBG | domain multinode-266395 has defined IP address 192.168.39.18 and MAC address 52:54:00:64:1d:b6 in network mk-multinode-266395
	I0108 23:20:07.428444  423123 main.go:141] libmachine: (multinode-266395) Calling .GetSSHPort
	I0108 23:20:07.428614  423123 main.go:141] libmachine: (multinode-266395) Calling .GetSSHKeyPath
	I0108 23:20:07.428768  423123 main.go:141] libmachine: (multinode-266395) Calling .GetSSHUsername
	I0108 23:20:07.428919  423123 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395/id_rsa Username:docker}
	I0108 23:20:07.518964  423123 ssh_runner.go:195] Run: systemctl --version
	I0108 23:20:07.524606  423123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:20:07.540319  423123 kubeconfig.go:92] found "multinode-266395" server: "https://192.168.39.18:8443"
	I0108 23:20:07.540356  423123 api_server.go:166] Checking apiserver status ...
	I0108 23:20:07.540400  423123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:20:07.553140  423123 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	I0108 23:20:07.562788  423123 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod693c20f812d77c22a17dccfbf3ed1fb9/crio-c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9"
	I0108 23:20:07.562857  423123 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod693c20f812d77c22a17dccfbf3ed1fb9/crio-c155a8bd6659d8ec35f345d133112ca497a309f98042c6be7cc3382b139650a9/freezer.state
	I0108 23:20:07.573115  423123 api_server.go:204] freezer state: "THAWED"
	I0108 23:20:07.573142  423123 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0108 23:20:07.578320  423123 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0108 23:20:07.578341  423123 status.go:421] multinode-266395 apiserver status = Running (err=<nil>)
	I0108 23:20:07.578351  423123 status.go:257] multinode-266395 status: &{Name:multinode-266395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:20:07.578367  423123 status.go:255] checking status of multinode-266395-m02 ...
	I0108 23:20:07.578650  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.578700  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.593566  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0108 23:20:07.593948  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.594407  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.594423  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.594716  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.594910  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetState
	I0108 23:20:07.596638  423123 status.go:330] multinode-266395-m02 host status = "Running" (err=<nil>)
	I0108 23:20:07.596658  423123 host.go:66] Checking if "multinode-266395-m02" exists ...
	I0108 23:20:07.596939  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.596971  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.612413  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0108 23:20:07.612843  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.613315  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.613340  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.613725  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.613914  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetIP
	I0108 23:20:07.617143  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:20:07.617531  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:20:07.617561  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:20:07.617727  423123 host.go:66] Checking if "multinode-266395-m02" exists ...
	I0108 23:20:07.618072  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.618118  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.632677  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39027
	I0108 23:20:07.633075  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.633533  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.633559  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.633944  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.634128  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .DriverName
	I0108 23:20:07.634319  423123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:20:07.634342  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHHostname
	I0108 23:20:07.636955  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:20:07.637283  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9d:f1", ip: ""} in network mk-multinode-266395: {Iface:virbr1 ExpiryTime:2024-01-09 00:18:37 +0000 UTC Type:0 Mac:52:54:00:ec:9d:f1 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-266395-m02 Clientid:01:52:54:00:ec:9d:f1}
	I0108 23:20:07.637326  423123 main.go:141] libmachine: (multinode-266395-m02) DBG | domain multinode-266395-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:ec:9d:f1 in network mk-multinode-266395
	I0108 23:20:07.637456  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHPort
	I0108 23:20:07.637620  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHKeyPath
	I0108 23:20:07.637742  423123 main.go:141] libmachine: (multinode-266395-m02) Calling .GetSSHUsername
	I0108 23:20:07.637867  423123 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-399915/.minikube/machines/multinode-266395-m02/id_rsa Username:docker}
	I0108 23:20:07.726377  423123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:20:07.739150  423123 status.go:257] multinode-266395-m02 status: &{Name:multinode-266395-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:20:07.739190  423123 status.go:255] checking status of multinode-266395-m03 ...
	I0108 23:20:07.739565  423123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 23:20:07.739614  423123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:20:07.755726  423123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0108 23:20:07.756216  423123 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:20:07.756700  423123 main.go:141] libmachine: Using API Version  1
	I0108 23:20:07.756722  423123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:20:07.757043  423123 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:20:07.757224  423123 main.go:141] libmachine: (multinode-266395-m03) Calling .GetState
	I0108 23:20:07.758629  423123 status.go:330] multinode-266395-m03 host status = "Stopped" (err=<nil>)
	I0108 23:20:07.758662  423123 status.go:343] host is not running, skipping remaining checks
	I0108 23:20:07.758670  423123 status.go:257] multinode-266395-m03 status: &{Name:multinode-266395-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-266395 node start m03 --alsologtostderr: (29.457383899s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-266395 node delete m03: (1.250687033s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.85s)
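The go-template passed to kubectl above walks every node's status conditions and prints the Ready status, one line per node. A minimal sketch with Go's text/template, using hypothetical sample data (the node list below is illustrative, not taken from the cluster):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template copied from the kubectl invocation above: for each item (node),
	// for each status condition, print the status when the condition type is "Ready".
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical two-node list standing in for the kubectl JSON output.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
				{"type": "MemoryPressure", "status": "False"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" on its own line, once per node
}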

                                                
                                    
TestMultiNode/serial/RestartMultiNode (441.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266395 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0108 23:35:49.610793  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:36:13.677317  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0108 23:38:52.660475  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:39:19.627835  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0108 23:40:49.610801  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:41:13.678153  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266395 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m21.001317655s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-266395 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (441.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-266395
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266395-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-266395-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.363369ms)

                                                
                                                
-- stdout --
	* [multinode-266395-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-266395-m02' is duplicated with machine name 'multinode-266395-m02' in profile 'multinode-266395'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-266395-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-266395-m03 --driver=kvm2  --container-runtime=crio: (45.993173511s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-266395
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-266395: exit status 80 (245.589013ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-266395
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-266395-m03 already exists in multinode-266395-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-266395-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.37s)
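Note: this test exercises two guards seen in the output above: a new profile may not reuse a machine name already owned by an existing multinode profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose name is already taken by a standalone profile (exit 80, GUEST_NODE_ADD). A minimal sketch of the same sequence, using the profile names from this run as examples:

	# Rejected: "-m02" collides with the second machine of the existing multinode profile
	out/minikube-linux-amd64 start -p multinode-266395-m02 --driver=kvm2 --container-runtime=crio
	# Accepted: "-m03" is still free as a standalone profile name
	out/minikube-linux-amd64 start -p multinode-266395-m03 --driver=kvm2 --container-runtime=crio
	# Rejected: adding a node would create multinode-266395-m03, which now exists as its own profile
	out/minikube-linux-amd64 node add -p multinode-266395
	out/minikube-linux-amd64 delete -p multinode-266395-m03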

                                                
                                    
x
+
TestScheduledStopUnix (120.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-561489 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-561489 --memory=2048 --driver=kvm2  --container-runtime=crio: (48.304204892s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-561489 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-561489 -n scheduled-stop-561489
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-561489 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 23:47:22.675164  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-561489 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-561489 -n scheduled-stop-561489
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-561489
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-561489 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-561489
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-561489: exit status 7 (82.350829ms)

                                                
                                                
-- stdout --
	scheduled-stop-561489
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-561489 -n scheduled-stop-561489
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-561489 -n scheduled-stop-561489: exit status 7 (78.494316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-561489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-561489
--- PASS: TestScheduledStopUnix (120.18s)
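Note: the scheduled-stop test drives the `--schedule` and `--cancel-scheduled` flags shown above. A condensed sketch of the flow (profile name is illustrative; timings match this run):

	# Schedule a stop, inspect the countdown, cancel, then schedule a short stop and let it fire
	out/minikube-linux-amd64 stop -p scheduled-stop-561489 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-561489 -n scheduled-stop-561489
	out/minikube-linux-amd64 stop -p scheduled-stop-561489 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-561489 --schedule 15s
	# Once the scheduled stop has run, status reports Stopped and exits with status 7
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-561489 -n scheduled-stop-561489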

                                                
                                    
x
+
TestKubernetesUpgrade (161.47s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.712214703s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-638401
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-638401: (2.209675428s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-638401 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-638401 status --format={{.Host}}: exit status 7 (111.934267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.87297468s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-638401 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (105.947594ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-638401] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-638401
	    minikube start -p kubernetes-upgrade-638401 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6384012 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-638401 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 23:50:49.610739  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.282168114s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-638401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-638401
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-638401: (1.113102317s)
--- PASS: TestKubernetesUpgrade (161.47s)
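Note: the upgrade test covers start-on-old-version, stop, in-place upgrade, a rejected downgrade (exit 106, K8S_DOWNGRADE_UNSUPPORTED), and a restart on the new version. A hedged sketch of the same sequence with the versions used in this run:

	# Start on an old release, stop, then restart the same profile on a newer release (in-place upgrade)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-638401
	out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
	# Downgrading the same profile is refused; the suggestion is to delete and recreate instead
	out/minikube-linux-amd64 start -p kubernetes-upgrade-638401 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio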

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-976891 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-976891 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (130.38956ms)

                                                
                                                
-- stdout --
	* [false-976891] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:48:38.216670  431294 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:48:38.216937  431294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:48:38.216948  431294 out.go:309] Setting ErrFile to fd 2...
	I0108 23:48:38.216956  431294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:48:38.217172  431294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-399915/.minikube/bin
	I0108 23:48:38.217824  431294 out.go:303] Setting JSON to false
	I0108 23:48:38.218889  431294 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16244,"bootTime":1704741474,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:48:38.218976  431294 start.go:138] virtualization: kvm guest
	I0108 23:48:38.221696  431294 out.go:177] * [false-976891] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:48:38.223094  431294 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:48:38.223125  431294 notify.go:220] Checking for updates...
	I0108 23:48:38.224535  431294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:48:38.225956  431294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	I0108 23:48:38.227288  431294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	I0108 23:48:38.228578  431294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:48:38.229889  431294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:48:38.231876  431294 config.go:182] Loaded profile config "kubernetes-upgrade-638401": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 23:48:38.232026  431294 config.go:182] Loaded profile config "offline-crio-619987": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:48:38.232102  431294 config.go:182] Loaded profile config "stopped-upgrade-621247": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 23:48:38.232195  431294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:48:38.269635  431294 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 23:48:38.271206  431294 start.go:298] selected driver: kvm2
	I0108 23:48:38.271223  431294 start.go:902] validating driver "kvm2" against <nil>
	I0108 23:48:38.271235  431294 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:48:38.273198  431294 out.go:177] 
	W0108 23:48:38.274577  431294 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 23:48:38.275852  431294 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-976891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-976891

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-976891"

                                                
                                                
----------------------- debugLogs end: false-976891 [took: 3.420470358s] --------------------------------
helpers_test.go:175: Cleaning up "false-976891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-976891
--- PASS: TestNetworkPlugins/group/false (3.71s)
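Note: this test only verifies argument validation. With the crio runtime, `--cni=false` is rejected before any VM is created (exit 14, MK_USAGE: the crio container runtime requires CNI), which is why every debugLogs probe above reports a missing context or profile. Sketch of the rejected invocation:

	# Rejected up front: crio needs a CNI, so --cni=false never reaches VM creation
	out/minikube-linux-amd64 start -p false-976891 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio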

                                                
                                    
x
+
TestPause/serial/Start (108.41s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-632250 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-632250 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.404896857s)
--- PASS: TestPause/serial/Start (108.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.104593ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-570869] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-399915/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-399915/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
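Note: the two flags are mutually exclusive; `--no-kubernetes` cannot be combined with `--kubernetes-version` (exit 14), and the suggested fix for a globally pinned version is to unset it. Sketch:

	# Rejected: a Kubernetes version makes no sense when Kubernetes is disabled
	out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# Clear a globally pinned version if one is set
	out/minikube-linux-amd64 config unset kubernetes-version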

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (50.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570869 --driver=kvm2  --container-runtime=crio
E0108 23:51:13.676896  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570869 --driver=kvm2  --container-runtime=crio: (50.076017147s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-570869 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.988740572s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-570869 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-570869 status -o json: exit status 2 (267.424575ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-570869","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-570869
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-570869: (1.079684666s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.34s)
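Note: restarting an existing profile with `--no-kubernetes` keeps the VM running but stops the kubelet and API server, which `status -o json` reflects (the command exits 2 because components are stopped). Sketch based on the commands above:

	# Re-start the same profile without Kubernetes, then inspect component state
	out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-570869 status -o json
	# -> {"Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured",...}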

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (38.28s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-632250 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-632250 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.260140698s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570869 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.480701921s)
--- PASS: TestNoKubernetes/serial/Start (31.48s)

                                                
                                    
x
+
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-632250 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-570869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-570869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.16703ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
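Note: the check simply asks systemd inside the guest whether the kubelet unit is active; a non-zero result from `systemctl is-active` (surfaced here as ssh exit status 3 and minikube exit status 1) is the expected "not running" outcome. Sketch, verbatim from the log:

	# Non-zero exit means the kubelet service is not active inside the guest
	out/minikube-linux-amd64 ssh -p NoKubernetes-570869 "sudo systemctl is-active --quiet service kubelet"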

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-632250 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-632250 --output=json --layout=cluster: exit status 2 (280.891069ms)

                                                
                                                
-- stdout --
	{"Name":"pause-632250","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-632250","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-632250 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-570869
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-570869: (1.314036959s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.01s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-632250 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-632250 --alsologtostderr -v=5: (1.005337038s)
--- PASS: TestPause/serial/PauseAgain (1.01s)
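Note: taken together, the pause tests walk the full cycle: pause, verify via cluster-layout status (StatusName "Paused", exit 2), unpause, then pause again. A compact sketch using this run's profile name:

	out/minikube-linux-amd64 pause -p pause-632250 --alsologtostderr -v=5
	out/minikube-linux-amd64 status -p pause-632250 --output=json --layout=cluster   # reports "Paused" and exits 2
	out/minikube-linux-amd64 unpause -p pause-632250 --alsologtostderr -v=5
	out/minikube-linux-amd64 pause -p pause-632250 --alsologtostderr -v=5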

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (82.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-570869 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-570869 --driver=kvm2  --container-runtime=crio: (1m22.248204757s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (82.25s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-632250 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.83s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-621247
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (173.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m53.861488344s)
--- PASS: TestNetworkPlugins/group/auto/Start (173.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-570869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-570869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.983045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (145.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0108 23:54:19.627908  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m25.015844338s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (145.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (120.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0108 23:55:32.661181  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:55:49.610737  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0108 23:56:13.676620  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m0.886681092s)
--- PASS: TestNetworkPlugins/group/calico/Start (120.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rf54h" [b62c3e15-91ea-49d4-b884-9614d62f2692] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005189657s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
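Note: the test waits for the kindnet DaemonSet pod (label app=kindnet) to become healthy in kube-system using the harness's own pod-watch helper. Outside the harness, a roughly equivalent manual check could be done with `kubectl wait` (this is an assumed equivalent, not what the test itself runs):

	# Roughly equivalent manual readiness check for the kindnet controller pod
	kubectl --context kindnet-976891 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m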

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w7wbb" [9fc28f59-c131-4838-a03c-aab4d6582b40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w7wbb" [9fc28f59-c131-4838-a03c-aab4d6582b40] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.008055555s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mpvvg" [6f473de3-6817-4fbc-8b2d-1a4b895620a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mpvvg" [6f473de3-6817-4fbc-8b2d-1a4b895620a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005606756s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
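Note: the DNS, Localhost and HairPin checks all exec into the netcat deployment: resolve the in-cluster API service, connect to the pod's own localhost port, then connect back to the pod through its own Service name (the hairpin path). Sketch using the commands from the log:

	kubectl --context kindnet-976891 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"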

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
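The DNS, Localhost and HairPin checks above all exec into the netcat deployment: nslookup against kubernetes.default, nc to localhost, and nc to the service's own name (the hairpin case). A small Go sketch that reproduces the three probes with kubectl exec, assuming only that the deployment from the NetCatPod step is still present:

// Hypothetical sketch of the three probes logged above, run inside the
// netcat deployment with kubectl exec. The commands mirror the log lines.
package main

import (
	"fmt"
	"os/exec"
)

func probe(kubeContext, shellCmd string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n%s", shellCmd, out)
	return err
}

func main() {
	ctx := "auto-976891" // profile/context from the run above
	probes := []string{
		"nslookup kubernetes.default",    // DNS resolution inside the pod
		"nc -w 5 -i 5 -z localhost 8080", // Localhost reachability
		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: pod reaches its own service name
	}
	for _, p := range probes {
		if err := probe(ctx, p); err != nil {
			fmt.Println("probe failed:", err)
		}
	}
}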

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (89.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m29.866144363s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.87s)
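The custom-flannel Start above differs from the built-in CNI runs only in that --cni points at a user-supplied manifest instead of a plugin name. A hedged sketch of driving the same invocation from Go with a timeout; the profile name and manifest path are placeholders, and the kvm2 driver must be available on the host:

// Hypothetical sketch: invoke `minikube start` with a custom CNI manifest,
// mirroring the flags in the log above. Profile and manifest are placeholders.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "minikube", "start",
		"-p", "custom-flannel-demo", // placeholder profile name
		"--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", // user-supplied CNI manifest
		"--driver=kvm2",
		"--container-runtime=crio",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}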

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (133.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m13.373322178s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (133.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hnxfk" [002332b1-1b23-4413-a932-64580ec61ce7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006498557s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lb2bj" [f5783c83-6070-4349-8a4a-f8fff0d1f0c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lb2bj" [f5783c83-6070-4349-8a4a-f8fff0d1f0c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006395333s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (103.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m43.179721944s)
--- PASS: TestNetworkPlugins/group/flannel/Start (103.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (116.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-976891 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m56.982978529s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lbbrp" [e58e1277-84c5-417d-b600-fff04dd7dc81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lbbrp" [e58e1277-84c5-417d-b600-fff04dd7dc81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005372451s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (139.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003293 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0108 23:59:19.627969  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-003293 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m19.223038576s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-th87f" [100f4125-305a-48d1-9757-26d6ae43be39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-th87f" [100f4125-305a-48d1-9757-26d6ae43be39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004182445s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-th258" [1c6f250b-be39-447c-ac8e-bea04fd499ce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015513468s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ssmbm" [fe0d3678-96be-4d2c-a736-b60215861018] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ssmbm" [fe0d3678-96be-4d2c-a736-b60215861018] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004174912s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (129.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-378213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-378213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m9.721562207s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (129.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (71.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845373 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845373 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m11.958943244s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-976891 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-976891 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2wxps" [9892d90e-6ef1-49b9-adc1-91027f59b57c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2wxps" [9892d90e-6ef1-49b9-adc1-91027f59b57c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004211571s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-976891 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-976891 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-834116 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:01:13.676618  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-834116 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m44.134263941s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (104.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003293 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3fa12d2f-a26c-4bdb-9ea8-7ef918e4897f] Pending
helpers_test.go:344: "busybox" [3fa12d2f-a26c-4bdb-9ea8-7ef918e4897f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0109 00:01:28.294941  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.301042  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.311401  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.331741  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.372728  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.453148  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.613664  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:28.934793  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:01:29.575330  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3fa12d2f-a26c-4bdb-9ea8-7ef918e4897f] Running
E0109 00:01:30.856494  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003969017s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003293 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)
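DeployApp above creates a busybox pod from testdata/busybox.yaml, waits for it to become healthy, then reads the open-file limit inside it with ulimit -n. A rough Go sketch of the deploy-and-exec part, with a placeholder manifest path and context name, and without the readiness wait the real test performs:

// Hypothetical sketch of the deploy-and-check pattern logged above:
// apply a manifest, then exec `ulimit -n` inside the resulting pod.
// The context name and busybox.yaml path are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	kubeCtx := "old-k8s-version-demo" // placeholder context name
	if out, err := run("--context", kubeCtx, "create", "-f", "busybox.yaml"); err != nil {
		fmt.Println(out, err)
		return
	}
	// The real test waits for the pod to be Running before the exec.
	out, err := run("--context", kubeCtx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println("open file limit:", out, err)
}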

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845373 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [95b0e06a-0d5e-468e-b4d8-6753f2117435] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [95b0e06a-0d5e-468e-b4d8-6753f2117435] Running
E0109 00:01:33.416999  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004616273s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845373 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0109 00:01:37.765820  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:37.771079  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:37.781340  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:37.802448  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:37.842790  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:37.923159  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:01:38.083721  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003293 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.180985304s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-003293 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)
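EnableAddonWhileActive above turns on the metrics-server addon while the cluster is running, substituting the image and registry via --images and --registries, and then describes the resulting deployment. A sketch of those two commands from Go; the profile name is a placeholder and the flags are the ones shown in the log:

// Hypothetical sketch of the addon-enable-with-overrides step above:
// enable metrics-server with a substitute image/registry, then describe
// the deployment it creates. The profile name is a placeholder.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-demo" // placeholder

	enable := exec.Command("minikube", "addons", "enable", "metrics-server",
		"-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		fmt.Println(string(out), err)
		return
	}

	describe := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system")
	out, _ := describe.CombinedOutput()
	fmt.Println(string(out))
}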

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0109 00:01:39.044863  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115661365s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-845373 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378213 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc00608b-e824-495e-9f13-9ddfe68e8c7b] Pending
helpers_test.go:344: "busybox" [bc00608b-e824-495e-9f13-9ddfe68e8c7b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc00608b-e824-495e-9f13-9ddfe68e8c7b] Running
E0109 00:02:09.258098  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004872905s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-378213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-378213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047076733s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-378213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce0bd577-8a0e-4801-bd3b-190307b70852] Pending
helpers_test.go:344: "busybox" [ce0bd577-8a0e-4801-bd3b-190307b70852] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce0bd577-8a0e-4801-bd3b-190307b70852] Running
E0109 00:02:50.218579  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005537535s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-834116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-834116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070373491s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-834116 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (787.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003293 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-003293 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m7.333103865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003293 -n old-k8s-version-003293
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (787.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (895.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845373 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:04:12.138989  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845373 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m54.876620124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845373 -n embed-certs-845373
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (895.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (926.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-378213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0109 00:04:45.236476  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:04:50.356743  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-378213 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (15m25.918619308s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-378213 -n no-preload-378213
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (926.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (556.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-834116 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:05:28.030170  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.035677  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.046016  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.066406  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.106719  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.187176  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.347703  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:28.668365  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:29.309494  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:30.589666  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:33.150666  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:38.271686  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:44.249581  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:05:48.511908  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:05:49.610569  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:06:02.038396  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:06:08.992146  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:06:13.677499  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0109 00:06:19.857403  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:06:28.295142  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:06:37.766513  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:06:49.952615  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:06:55.979273  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:07:05.449857  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:07:06.170825  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:07:20.222824  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:07:23.959607  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:07:47.905835  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:08:11.872857  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:08:36.013500  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:09:03.697978  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:09:19.627705  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:09:22.327512  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:09:40.117055  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:09:50.012090  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:10:07.799839  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
E0109 00:10:28.030212  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:10:49.611040  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:10:55.713125  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/bridge-976891/client.crt: no such file or directory
E0109 00:11:13.677259  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/functional-483810/client.crt: no such file or directory
E0109 00:11:28.295408  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/kindnet-976891/client.crt: no such file or directory
E0109 00:11:37.766597  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/auto-976891/client.crt: no such file or directory
E0109 00:12:12.661552  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
E0109 00:12:20.222927  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/calico-976891/client.crt: no such file or directory
E0109 00:13:36.012869  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/custom-flannel-976891/client.crt: no such file or directory
E0109 00:14:19.628019  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/ingress-addon-legacy-132808/client.crt: no such file or directory
E0109 00:14:22.327315  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/enable-default-cni-976891/client.crt: no such file or directory
E0109 00:14:40.117184  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/flannel-976891/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-834116 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m16.001852652s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-834116 -n default-k8s-diff-port-834116
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (556.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (65.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-745275 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-745275 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m5.539135681s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-745275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-745275 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.660779154s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-745275 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-745275 --alsologtostderr -v=3: (11.283241477s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-745275 -n newest-cni-745275
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-745275 -n newest-cni-745275: exit status 7 (83.839861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-745275 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
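EnableAddonAfterStop above first runs minikube status --format={{.Host}} against the stopped profile; the command exits non-zero (exit status 7 here) while printing Stopped, and the test treats that as acceptable before enabling the dashboard addon. A sketch of that tolerant status check, with a placeholder profile name:

// Hypothetical sketch of the tolerant status check logged above: run
// `minikube status --format={{.Host}}` and accept a non-zero exit as long
// as stdout reports Stopped, as the test does. Profile is a placeholder.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "newest-cni-demo" // placeholder
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.Output() // stdout is still returned alongside an ExitError
	host := strings.TrimSpace(string(out))
	if err != nil && host == "Stopped" {
		fmt.Println("host is stopped; non-zero exit is expected here")
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", host)
}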

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (47.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-745275 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0109 00:30:49.610314  407094 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-399915/.minikube/profiles/addons-910124/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-745275 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (47.172087614s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-745275 -n newest-cni-745275
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.44s)
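For reference, the second start above re-uses the stopped profile with the same flags as the first start. A by-hand sketch of the equivalent invocation, assuming the same profile and binary as above:

# --network-plugin=cni plus the kubeadm pod-network-cidr extra-config delegate pod
# networking to an external CNI, consistent with the "cni mode requires additional
# setup" warnings elsewhere in this group, so the test only waits for the
# apiserver, system pods and default service account
out/minikube-linux-amd64 start -p newest-cni-745275 --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.29.0-rc.2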

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-745275 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-745275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-745275 -n newest-cni-745275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-745275 -n newest-cni-745275: exit status 2 (257.575321ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-745275 -n newest-cni-745275
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-745275 -n newest-cni-745275: exit status 2 (257.651582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-745275 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-745275 -n newest-cni-745275
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-745275 -n newest-cni-745275
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)
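For reference, the pause check above follows a pause / verify / unpause / verify sequence. A minimal sketch, assuming the same profile and binary as above:

out/minikube-linux-amd64 pause -p newest-cni-745275 --alsologtostderr -v=1
# while paused, the run above shows the apiserver as "Paused" and the kubelet as
# "Stopped"; both status calls exit 2, which the test accepts ("may be ok")
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-745275 -n newest-cni-745275
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-745275 -n newest-cni-745275
out/minikube-linux-amd64 unpause -p newest-cni-745275 --alsologtostderr -v=1
# after unpause the same status calls are expected to succeed again
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-745275 -n newest-cni-745275
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-745275 -n newest-cni-745275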

                                                
                                    

Test skip (39/306)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
242 TestNetworkPlugins/group/kubenet 4.01
251 TestNetworkPlugins/group/cilium 4.11
257 TestStartStop/group/disable-driver-mounts 0.19
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-976891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-976891

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-976891"

                                                
                                                
----------------------- debugLogs end: kubenet-976891 [took: 3.84908075s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-976891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-976891
--- SKIP: TestNetworkPlugins/group/kubenet (4.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-976891 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-976891

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-976891" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-976891

>>> host: docker daemon status:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: docker daemon config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: docker system info:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: cri-docker daemon status:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: cri-docker daemon config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: cri-dockerd version:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: containerd daemon status:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: containerd daemon config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: containerd config dump:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: crio daemon status:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: crio daemon config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: /etc/crio:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

>>> host: crio config:
* Profile "cilium-976891" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-976891"

----------------------- debugLogs end: cilium-976891 [took: 3.931557227s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-976891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-976891
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-566492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-566492
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)